Re: Realtime CDC demo

2024-05-20 Thread Maksim Timonin
Hi!

Apache Ignite now provides the `CdcManager` interface that captures changes in
WAL segments within the Ignite node. Ignite itself doesn't ship a complete
implementation of this CDC mode, but it provides the extension point to
implement one. Please check the Javadoc of the `CdcManager` interface.

I suppose there are a few possible implementations of the interface. The right
one depends on the security requirements of the environment, on how much
influence CDC is allowed to have on node operation, etc. AFAIK, there is
currently no open-source implementation.

Maksim


Re: [ANNOUNCE] Apache Ignite 2.14.0 Released

2023-07-10 Thread Maksim Timonin
Hello!

The Ignite team draws attention to the changed default behavior of Java
thin clients since Ignite release 2.14.

Before release 2.14, the default value for the flag
BinaryConfiguration#compactFooter differed on Java thin clients (false) and
client/server nodes (true). For details about the flag, check out this doc
[1]. This difference has an unexpected side effect for caches that use POJO
as cache key. A key inserted from the thin client (with the default
BinaryConfiguration) is not visible from client or server nodes, and vice
versa. It means `get(PojoKey)` started on the server returns NULL for the
`PojoKey` inserted by the thin client.

This behavior changed in release 2.14. Now the Java thin client retrieves the
BinaryConfiguration from the server and uses it instead of the locally
configured or default one. This fixes the side effect described above for new
clients, but it might affect users who rely on the default BinaryConfiguration
and use a POJO as cache key in the following cases:
1. Persistence is enabled, and the storage is filled by an older Java thin
client (e.g., 2.13). After upgrading the thin client to 2.14, it will not
observe existing keys.
2. Different versions of Java thin clients are used simultaneously (e.g.,
2.13 and 2.14). Keys inserted by one client will not be visible to another.

If this change affects you, there are two possible solutions (see the sketch after the list):
1. Align configurations on server and thin clients: set
`BinaryConfiguration#compactFooter = false` on server nodes. You must be
confident that all data in the persistent storage is inserted only by thin
clients.
2. Fallback to the previous behavior: set
`ClientConfiguration#setAutoBinaryConfigurationEnabled = false` on Java
thin clients.
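
A minimal sketch of both options (the setter names are the ones mentioned
above; apply only one of the two):

```java
// Option 1: align the servers with the thin clients' historical default.
IgniteConfiguration srvCfg = new IgniteConfiguration()
    .setBinaryConfiguration(new BinaryConfiguration().setCompactFooter(false));

// Option 2: keep the pre-2.14 thin client behavior.
ClientConfiguration clientCfg = new ClientConfiguration()
    .setAddresses("127.0.0.1:10800") // placeholder address
    .setAutoBinaryConfigurationEnabled(false);
```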

[1]
https://ignite.apache.org/docs/latest/binary-client-protocol/data-format#schema

On Wed, Oct 12, 2022 at 4:10 PM Pavel Tupitsyn  wrote:

> A small post about .NET changes:
> https://ptupitsyn.github.io/Whats-New-In-Ignite-Net-2.14/
>
> Cheers!
>
> On Tue, Oct 11, 2022 at 10:56 AM Taras Ledkov  wrote:
>
> > The Apache Ignite Community is pleased to announce the release of
> > Apache Ignite 2.14.0.
> >
> > Apache Ignite® is an in-memory computing platform for transactional,
> > analytical, and streaming workloads delivering in-memory speeds at
> > a petabyte scale.
> > https://ignite.apache.org
> >
> > For the full list of changes, you can refer to the RELEASE_NOTES list
> > which is trying to catalogue the most significant improvements for
> > this version of the platform.
> > https://ignite.apache.org/releases/2.14.0/release_notes.html
> >
> > Download the latest Ignite version from here:
> > https://ignite.apache.org/download.cgi
> >
> > Please let us know if you encounter any problems:
> > https://ignite.apache.org/community/resources.html#ask
> >
> > --
> > With best regards,
> > Taras Ledkov
> >
>


Re: Apache Ignite 3.0 Questions

2023-05-26 Thread Maksim Timonin
Hi David,

 2) What is the end of life matrix or end of support of 2.0?


There is a stable community of great engineers that improve and support
Ignite 2. Support of Ignite 2 will not be terminated in the foreseeable
future.

On Fri, May 26, 2023 at 1:04 PM David Bucek  wrote:

> Hello,
>
> we are considering to use Apache Ignite for our software and we have
> couple of questions:
>
> 1) When will Apache Ignite 3.0 be the stable version?
> 2) What is the end of life matrix or end of support of 2.0?
> 3) Will the 3.0 version still be embeddable to the Java application? (very
> useful for us).
>
> Thank you for answers
>
>
> David Bucek | Software Architect | MANTA
>


Re: How to handle missed indexes and affinity keys?

2022-12-19 Thread Maksim Timonin
Hi Roza!

>> We have hypothesis that somehow these corrupted caches were created by
Key-Value API, not SQL.

Am I correct that you first create a cache "PUBLIC_ProductFeatures", and
then create a table over the existing cache by invoking the DDL query?
Could you please share some additional info that can help to reproduce your
case? At least:
1. Ignite version on which cache and table were created.
2. CacheConfiguration for the cache if it was created before.
3. Classes specified in KEY_TYPE and VALUE_TYPE.
4. The SQL query you run and the "explain" plan for it.

>> The more important question - is there any way to rebuild index and add
affinity key back?

The affinity key can be set only at cache creation time. If you create the
cache before the table, please check the docs:
https://ignite.apache.org/docs/2.14.0/data-modeling/affinity-collocation#configuring-affinity-key

Thanks,
Maksim


On Mon, Dec 19, 2022 at 3:55 PM Айсина Роза Мунеровна <
roza.ays...@sbermarket.ru> wrote:

> Hi Maksim!
>
> The problem is that simple SELECT query runs in ~20min - this index does
> not work.
>
> Moreover, other (not corrupted) tables with affinity key == primary key
> have an index on the concrete column, not on *_KEY*, and have a specified
> affinity key - see my first message with an example.
>
> We have hypothesis that somehow these corrupted caches were created by
> Key-Value API, not SQL. Otherwise how specified indexes and affinity keys
> were skipped in DDL while creating the caches?
>
> The more important question - is there any way to rebuild index and add
> affinity key back?
>
> Thanks!
>
> On 16 Dec 2022, at 4:30 PM, Maksim Timonin 
> wrote:
>
> Hi Roza,
>
> In this DDL the primary key (product_sku) equals the affinity key
> (product_sku). In such cases Ignite skips creating a separate affinity-key
> index because the _key_PK index already covers it.
>
> Thanks,
> Maksim
>
> On Fri, Dec 16, 2022 at 2:06 PM Айсина Роза Мунеровна <
> roza.ays...@sbermarket.ru> wrote:
>
>> Hello Stephen!
>>
>> This DDL we use:
>>
>> CREATE TABLE IF NOT EXISTS PUBLIC.ProductFeatures
>> (
>> product_sku INT PRIMARY KEY,
>> total_cnt_orders_with_sku INT
>> )
>> WITH "CACHE_NAME=PUBLIC_ProductFeatures,
>> KEY_TYPE=io.sbmt.ProductFeaturesKey,
>> VALUE_TYPE=io.sbmt.ProductFeaturesValue, AFFINITY_KEY=product_sku,
>> TEMPLATE=PARTITIONED, BACKUPS=1"
>>
>> And all tables are created similarly.
>>
>> On 16 Dec 2022, at 1:03 PM, Stephen Darlington <
>> stephen.darling...@gridgain.com> wrote:
>>
>>
>> What are the CREATE TABLE  commands for those tables?
>>
>> On 16 Dec 2022, at 09:39, Айсина Роза Мунеровна <
>> roza.ays...@sbermarket.ru> wrote:
>>
>> Hola!
>>
>> We've discovered some strange behaviour in Ignite cluster and now we are
>> trying to understand how to recover from this state.
>>
>> So we have 5 node cluster with persistence and all caches either
>> replicated or partitioned with affinity key.
>> All caches are created via DDL with CREATE TABLE IF NOT EXISTS statements
>> in one regular job (once per day).
>>
>> The problem is that we hit Query execution is too long warning.
>> After some debug we found out that some tables have missed indexes and
>> affinity keys.
>> More precisely - corrupted tables have indexes not by exact column name
>> but for _KEY column.
>> And no affinity key at all.
>>
>> select
>>   TABLE_NAME,
>>   INDEX_NAME,
>>   COLUMNS
>> from SYS.INDEXES
>> where TABLE_NAME = 'PRODUCTFEATURES'      -- broken table
>>   or TABLE_NAME = 'USERFEATURESDISCOUNT'  -- healthy table
>> ;
>>
>> Result:
>> +--------------------+------------+----------------------------+
>> |TABLE_NAME          |INDEX_NAME  |COLUMNS                     |
>> +--------------------+------------+----------------------------+
>> |USERFEATURESDISCOUNT|_key_PK_hash|"USER_ID" ASC, "USER_ID" ASC|
>> |USERFEATURESDISCOUNT|__SCAN_     |null                        |
>> |USERFEATURESDISCOUNT|_key_PK     |"USER_ID" ASC               |
>> |USERFEATURESDISCOUNT|AFFINITY_KEY|"USER_ID" ASC               |
>> |PRODUCTFEATURES     |_key_PK_hash|"_KEY" ASC                  |
>> |PRODUCTFEATURES     |__SCAN_     |null                        |
>> |PRODUCTFEATURES     |_key_PK     |"_KEY" ASC                  |
>> +--------------------+------------+----------------------------+

Re: How to handle missed indexes and affinity keys?

2022-12-16 Thread Maksim Timonin
Hi Roza,

In this DDL the primary key (product_sku) equals the affinity key
(product_sku). In such cases Ignite skips creating a separate affinity-key
index because the _key_PK index already covers it.

Thanks,
Maksim

On Fri, Dec 16, 2022 at 2:06 PM Айсина Роза Мунеровна <
roza.ays...@sbermarket.ru> wrote:

> Hello Stephen!
>
> This DDL we use:
>
> CREATE TABLE IF NOT EXISTS PUBLIC.ProductFeatures
> (
> product_sku INT PRIMARY KEY,
> total_cnt_orders_with_sku INT
> )
> WITH "CACHE_NAME=PUBLIC_ProductFeatures,
> KEY_TYPE=io.sbmt.ProductFeaturesKey,
> VALUE_TYPE=io.sbmt.ProductFeaturesValue, AFFINITY_KEY=product_sku,
> TEMPLATE=PARTITIONED, BACKUPS=1"
>
> And all tables are created similarly.
>
> On 16 Dec 2022, at 1:03 PM, Stephen Darlington <
> stephen.darling...@gridgain.com> wrote:
>
>
> What are the CREATE TABLE  commands for those tables?
>
> On 16 Dec 2022, at 09:39, Айсина Роза Мунеровна 
> wrote:
>
> Hola!
>
> We've discovered some strange behaviour in Ignite cluster and now we are
> trying to understand how to recover from this state.
>
> So we have 5 node cluster with persistence and all caches either
> replicated or partitioned with affinity key.
> All caches are created via DDL with CREATE TABLE IF NOT EXISTS statements
> in one regular job (once per day).
>
> The problem is that we hit Query execution is too long warning.
> After some debug we found out that some tables have missed indexes and
> affinity keys.
> More precisely - corrupted tables have indexes not by exact column name
> but for _KEY column.
> And no affinity key at all.
>
> select
>   TABLE_NAME,
>   INDEX_NAME,
>   COLUMNS
> from SYS.INDEXES
> where TABLE_NAME = 'PRODUCTFEATURES'      -- broken table
>   or TABLE_NAME = 'USERFEATURESDISCOUNT'  -- healthy table
> ;
>
> Result:
> +--------------------+------------+----------------------------+
> |TABLE_NAME          |INDEX_NAME  |COLUMNS                     |
> +--------------------+------------+----------------------------+
> |USERFEATURESDISCOUNT|_key_PK_hash|"USER_ID" ASC, "USER_ID" ASC|
> |USERFEATURESDISCOUNT|__SCAN_     |null                        |
> |USERFEATURESDISCOUNT|_key_PK     |"USER_ID" ASC               |
> |USERFEATURESDISCOUNT|AFFINITY_KEY|"USER_ID" ASC               |
> |PRODUCTFEATURES     |_key_PK_hash|"_KEY" ASC                  |
> |PRODUCTFEATURES     |__SCAN_     |null                        |
> |PRODUCTFEATURES     |_key_PK     |"_KEY" ASC                  |
> +--------------------+------------+----------------------------+
>
>
> Query execution even with simplest statements with filters on primary
> and affinity keys takes ~20min in best case.
> We have 8 tables, 5 out 8 are corrupted.
>
> So the questions are:
> 1. What can probably cause such state?
> 2. Is there any way to recover without full delete-refill tables? I see
> that index can be created via CREATE INDEX, but affinity key can be created
> only via CREATE TABLE statement?
>
> Thanks in advance!

Re: Error on Ignite.close(): class java.io.ObjectInputStream$Caches$1 cannot be cast to class java.util.Map

2022-08-15 Thread Maksim Timonin
Hi Rafael,

Getting a `java.lang.ClassCastException` there looks like a bug. I'm not sure,
but it seems the cache cleanup needs to differ between JDK 8 and JDK 11. Thanks
for finding that!
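
To illustrate the root cause (just a sketch, not the patch attached to
IGNITE-17481): on newer JDK builds the `subclassAudits` field is a
`ClassValue`, which cannot be cast to `Map` or bulk-cleared, so a
version-tolerant cleanup has to branch on the field's runtime type:

```java
import java.lang.reflect.Field;
import java.util.Map;

final class SerializationCacheCleanup {
    /** Best-effort cleanup of a static JDK serialization cache field. */
    static void clear(String clsName, String fieldName) {
        try {
            Field f = Class.forName(clsName).getDeclaredField(fieldName);
            f.setAccessible(true); // may need --add-opens java.base/java.io=ALL-UNNAMED on JDK 9+

            Object cache = f.get(null);

            if (cache instanceof Map)
                ((Map<?, ?>)cache).clear(); // older JDKs: a ConcurrentHashMap
            // Newer JDKs: the field is a ClassValue, which has no bulk clear().
            // Entries can only be dropped per class via ClassValue#remove(Class),
            // so a generic cleanup has to skip it.
        }
        catch (ReflectiveOperationException | RuntimeException ignored) {
            // Best effort only: failing to clear these caches is not fatal.
        }
    }

    public static void main(String[] args) {
        clear("java.io.ObjectInputStream$Caches", "subclassAudits");
        clear("java.io.ObjectOutputStream$Caches", "subclassAudits");
    }
}
```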

> Do you have a guideline "howto contribute" if this is wanted?

Please check this guide:
https://github.com/apache/ignite/blob/master/CONTRIBUTING.md

Feel free to ask any questions.



On Tue, Aug 9, 2022 at 4:20 PM Rafael Troilo 
wrote:

> Hi,
>
> @Paolo de Dios, thank you for creating an issue for this
> - https://issues.apache.org/jira/browse/IGNITE-17481
>
> I attached a patch for this issue.
>
> Do you have a guideline "howto contribute" if this is wanted?
>
> Best,
> Rafael
>
>
>
> On 8/5/22 16:18, Rafael Troilo wrote:
> > Hi,
> >
> > in case it wasn't reported before.
> >
> > On Ignite.close we got an Error:
> >
> >
> > ```
> > SEVERE: Failed to stop component (ignoring): GridManagerAdapter
> [enabled=true, name=o.a.i.i.managers.deployment.GridDeploymentManager]
> > java.lang.ClassCastException: class java.io.ObjectInputStream$Caches$1 cannot be cast to
> > class java.util.Map (java.io.ObjectInputStream$Caches$1 and java.util.Map
> > are in module java.base of loader 'bootstrap')
> >  at
> org.apache.ignite.internal.managers.deployment.GridDeploymentStoreAdapter.clearSerializationCache(GridDeploymentStoreAdapter.java:151)
> >  at
> org.apache.ignite.internal.managers.deployment.GridDeploymentStoreAdapter.clearSerializationCaches(GridDeploymentStoreAdapter.java:120)
> >  at
> org.apache.ignite.internal.managers.deployment.GridDeploymentLocalStore.undeploy(GridDeploymentLocalStore.java:565)
> >  at
> org.apache.ignite.internal.managers.deployment.GridDeploymentLocalStore.stop(GridDeploymentLocalStore.java:101)
> >  at
> org.apache.ignite.internal.managers.deployment.GridDeploymentManager.storesStop(GridDeploymentManager.java:630)
> >  at
> org.apache.ignite.internal.managers.deployment.GridDeploymentManager.stop(GridDeploymentManager.java:137)
> >  at
> org.apache.ignite.internal.IgniteKernal.stop0(IgniteKernal.java:1928)
> >  at
> org.apache.ignite.internal.IgniteKernal.stop(IgniteKernal.java:1806)
> >  at
> org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.stop0(IgnitionEx.java:2382)
> >  at
> org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.stop(IgnitionEx.java:2205)
> >  at org.apache.ignite.internal.IgnitionEx.stop(IgnitionEx.java:350)
> >  at org.apache.ignite.Ignition.stop(Ignition.java:230)
> >  at
> org.apache.ignite.internal.IgniteKernal.close(IgniteKernal.java:2776)
> > ```
> >
> > ver. 2.13.0#20220420-sha1:551f6ece
> > OS: Linux 4.15.0-189-generic amd64
> > VM information: OpenJDK Runtime Environment
> 11.0.16+8-post-Ubuntu-0ubuntu118.04 Ubuntu OpenJDK 64-Bit Server VM
> 11.0.16+8-post-Ubuntu-0ubuntu118.04
> >
> > The reason for this exception is access to the field
> > ObjectOutputStream$Caches.subclassAudits, which used to be of type
> > java.util.Map but changed to type java.lang.ClassValue!
> >
> > ```
> >
> > org.apache.ignite.internal.managers.deployment.GridDeploymentStoreAdapter::clearSerializationCaches
> >
> > clearSerializationCache(Class.forName("java.io.ObjectInputStream$Caches"), "subclassAudits");
> > clearSerializationCache(Class.forName("java.io.ObjectOutputStream$Caches"), "subclassAudits");
> > clearSerializationCache(Class.forName("java.io.ObjectStreamClass$Caches"), "localDescs");
> > clearSerializationCache(Class.forName("java.io.ObjectStreamClass$Caches"), "reflectors");
> > ```
> >
> > Is it safe to ignore this Exception? Any workarounds?
> >
> > Thank you,
> > Best,
> > Rafael
> >
> >
> >
>
> --
> Rafael Troilo
> HeiGIT gGmbH
> Heidelberg Institute for Geoinformation Technology at Heidelberg University
>
> https://heigit.org | rafael.tro...@heigit.org | phone +49-6221-533 484
>
> Postal address: Schloss-Wolfsbrunnenweg 33 | 69118 Heidelberg | Germany
> Offices: Berliner Str. 45 | 69120 Heidelberg | Germany
>
> Amtsgericht Mannheim | HRB 733765
> Managing Directors: Prof. Dr. Alexander Zipf | Dr. Gesa Schönberger


Re: IndexQuery with Transformer!

2022-08-01 Thread Maksim Timonin
Hi Rafael,

Thanks for your interest. Currently IndexQuery doesn't support transformers,
but I think we can add support for this, since IndexQuery reuses some of
ScanQuery's infrastructure. I've created a ticket [1] for it and will have a
look during this month.

Is `Cache.Entry::getKey` the only transformer function you actually use?

> And another question: does IndexQuery support local queries
(.setLocal(true))?

Yes, it does.


[1] https://issues.apache.org/jira/browse/IGNITE-17447
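
In the meantime, a possible workaround is to run the IndexQuery without a
transformer and extract the keys on the client side. A sketch (unlike a real
transformer, it still transfers whole entries from the server nodes):

```java
import static org.apache.ignite.cache.query.IndexQueryCriteriaBuilder.gt;

IndexQuery<Integer, Person> qry = new IndexQuery<Integer, Person>(Person.class)
    .setCriteria(gt("salary", 1000));

// Keys are extracted locally after the index-accelerated lookup.
List<Integer> keys = cache.query(qry).getAll().stream()
    .map(Cache.Entry::getKey)
    .collect(Collectors.toList());
```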

On Sat, Jul 30, 2022 at 1:36 PM Troilo, Rafael 
wrote:

> Hi,
>
> we were looking to use  the IndexQuery instead of ScanQuery with filter,
> to speed up our queries.
> But unfortunately it seems that IndexQuery are not a replacement for
> ScanQuery with filter (if index exist on filter criteria!),
> as IndexQuery does not support transformers.
>
> Exception in thread "main" java.lang.UnsupportedOperationException:
> Transformers are supported only for SCAN queries.
> at
> org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.query(IgniteCacheProxyImpl.java:871)
>
> Any ideas how we can make use of indices but  still be able to use
> transformers or is  it  planned for further releases, this would be awesome.
> Here is an example what we want to use:
>
> ```java
> var personEntity = new QueryEntity(Integer.class, Person.class)
>     .addQueryField("orgId", Integer.class.getName(), null)
>     .addQueryField("salary", Integer.class.getName(), null)
>     .setIndexes(List.of(
>         new QueryIndex("salary", QueryIndexType.SORTED)));
>
> var ccfg = new CacheConfiguration<Integer, Person>("entityCache")
>     .setQueryEntities(List.of(personEntity));
>
> var cache = ignite.getOrCreateCache(ccfg);
>
> // Get only keys for persons earning more than 1,000.
> List<Integer> keys;
>
> // scan query
> keys = cache.query(new ScanQuery<>(
>         (k, p) -> p.getSalary() > 1000),
>     (IgniteClosure<Cache.Entry<Integer, Person>, Integer>)
>         Cache.Entry::getKey).getAll();
>
> // index query
> keys = cache.query(new IndexQuery<Integer, Person>(Person.class)
>         .setCriteria(gt("salary", 1000)),
>     (IgniteClosure<Cache.Entry<Integer, Person>, Integer>)
>         Cache.Entry::getKey).getAll();
> ```
>
> And another question: does IndexQuery support local queries
> (.setLocal(true))?
>
> Thank you and looking forward to  hear your opinions.
>
> Best,
> Rafael
>
>
>
> --
> Rafael Troilo
> HeiGIT gGmbH
> Heidelberg Institute for Geoinformation Technology at Heidelberg University
>
> https://heigit.org | rafael.tro...@heigit.org | phone +49-6221-533 484
>
> Postal address: Schloss-Wolfsbrunnenweg 33 | 69118 Heidelberg | Germany
> Offices: Berliner Str. 45 | 69120 Heidelberg | Germany
>
> Amtsgericht Mannheim | HRB 733765
> Managing Directors: Prof. Dr. Alexander Zipf | Dr. Gesa Schönberger
>


Re: Re: ignite client can not reconnect to ignite Kubernetes cluster,after pod restart

2022-07-05 Thread Maksim Timonin
Hi guys!

I checked the PR and I think we should improve it a little. With this change,
`channelsInit` starts in every user thread whenever the topology changes. This
method invokes `initChannelHolders`, which is synchronized.

Before this change:
1. On a channel failure, the user thread retried the operation on the default
channel.

After this change:
1. The user thread first blocks and then does useless work (the channel
preparation will be done anyway in another thread via #onTopologyChanged).
2. After that it still follows the old logic and uses the default channel
instead, even after a successful update of the channels.

My proposal: add more restrictions before starting `channelsInit` from
`onChannelFailure`. I think we should:
1. Check that the addresses have completely changed and have no intersection
with the known addresses.
2. Check that this process wasn't already started concurrently, see
`scheduledChannelsReinit`.

WDYT?



On Tue, Jun 28, 2022 at 3:04 PM Pavel Tupitsyn  wrote:

> The fix has been merged.
> Thanks for the contribution, wkhapy123!
>
> On Fri, Jun 24, 2022 at 8:57 AM Pavel Tupitsyn 
> wrote:
>
>> Please request contributor permissions on d...@ignite.apache.org
>>
>> On Fri, Jun 24, 2022 at 6:53 AM wkhapy...@gmail.com 
>> wrote:
>>
>>> Hi,
>>>
>>> I want to contribute to Apache ignite .
>>>
>>> Would you please give me the contributor permission?
>>> below is my account
>>> --
>>> wkhapy...@gmail.com
>>>
>>>
>>> *From:* wkhapy...@gmail.com
>>> *Date:* 2022-06-23 22:34
>>> *To:* Pavel Tupitsyn 
>>> *Subject:* Re: ignite client can not reconnect to ignite Kubernetes
>>> cluster,after pod restart
>>> thank you!i will follow  wiki.
>>> ---Original---
>>> *From:* "Pavel Tupitsyn"
>>> *Date:* Thu, Jun 23, 2022 22:25 PM
>>> *To:* "wkhapy123";"user";
>>> *Subject:* Re: ignite client can not reconnect to ignite Kubernetes
>>> cluster,after pod restart
>>>
>>> > I want to fix this bug . I think it is good opportunity to study
>>> ignite.
>>>
>>> Great! Please go ahead.
>>> Make sure to check our wiki, register as a contributor, and assign the
>>> ticket to yourself.
>>>
>>>
>>> https://cwiki.apache.org/confluence/plugins/servlet/mobile?contentId=177047163#content/view/177047163
>>>
>>>
>>> On Thu, Jun 23, 2022, 17:02 wkhapy...@gmail.com 
>>> wrote:
>>>
>>>>
>>>> I want to fix this bug . I think it is good opportunity to study ignite.
>>>> ---Original---
>>>> *From:* "wkhapy...@gmail.com"
>>>> *Date:* Thu, Jun 23, 2022 21:41 PM
>>>> *To:* "Pavel Tupitsyn";
>>>> *Subject:* Re: ignite client can not reconnect to ignite Kubernetes
>>>> cluster,after pod restart
>>>>
>>>>  I am interested in ignite
>>>>
>>>> ---Original---
>>>> *From:* "wkhapy...@gmail.com"
>>>> *Date:* Thu, Jun 23, 2022 21:37 PM
>>>> *To:* "Pavel Tupitsyn";
>>>> *Subject:* Re: ignite client can not reconnect to ignite Kubernetes
>>>> cluster,after pod restart
>>>>
>>>> can I repair it
>>>>
>>>> ---Original---
>>>> *From:* "Pavel Tupitsyn"
>>>> *Date:* Thu, Jun 23, 2022 20:17 PM
>>>> *To:* "user";
>>>> *Subject:* Re: Re: ignite client can not reconnect to ignite
>>>> Kubernetes cluster,after pod restart
>>>>
>>>> It is a bug - addresses are not reloaded from AddressFinder on
>>>> connection loss, so we still try old pod address and fail:
>>>> https://issues.apache.org/jira/browse/IGNITE-17217
>>>>
>>>> Thanks for reporting this.
>>>>
>>>> On Thu, Jun 23, 2022 at 3:00 PM wkhapy...@gmail.com <
>>>> wkhapy...@gmail.com> wrote:
>>>>
>>>>> as you can see,address is 104
>>>>> but addressFinder.getAddress new  ip is 87,and retrylimit is 5 (i set)
>>>>> --
>>>>> wkhapy...@gmail.com
>>>>>
>>>>>
>>>>> *From:* wkhapy...@gmail.com
>>>>> *Date:* 2022-06-23 16:02
>>>>> *To:* Maksim Timonin 
>>>>> *Subject:* Re: Re: ignite client can not reconnect to ignite
>>>>> Kubernetes cluster,after pod restart

Re: ignite client can not reconnect to ignite Kubernetes cluster,after pod restart

2022-06-22 Thread Maksim Timonin
Hi,

Please try to use `ClientConfiguration#setRetryLimit` in addition to
`ClientRetryAllPolicy` (see the sketch below).
It should help. Please let me know whether it solves the issue.

Thanks!
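
Roughly like this (a sketch: the namespace, service name and retry limit are
placeholders, the class names are the ones from your configuration quoted
below):

```java
KubernetesConnectionConfiguration kcfg = new KubernetesConnectionConfiguration();
kcfg.setNamespace("my-namespace");        // placeholder
kcfg.setServiceName("my-ignite-service"); // placeholder

ClientConfiguration cfg = new ClientConfiguration()
    .setAddressesFinder(new ThinClientKubernetesAddressFinder(kcfg))
    .setRetryPolicy(new ClientRetryAllPolicy())
    .setRetryLimit(10) // retry an operation up to 10 times before failing it
    .setPartitionAwarenessEnabled(true);

try (IgniteClient client = Ignition.startClient(cfg)) {
    // use the client ...
}
```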


On Wed, Jun 22, 2022 at 8:02 AM Ilya Korol  wrote:

> Hi,
>
> Please take a look at
> https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/client/ClientAddressFinder.html,
>
> according to this, ThinClientKubernetesAddressFinder should refresh the address
> list on a client connection failure, or you can try to set *partitionAwareness
> = true* in *ClientConfiguration*, which should force the IP finder to
> proactively refresh the address list.
>
> On 2022/06/22 01:53:38 f cad wrote:
> > below is the client config code:
> >
> > KubernetesConnectionConfiguration kcfg = new KubernetesConnectionConfiguration();
> > kcfg.setNamespace(igniteK8sNameSpace);
> > kcfg.setServiceName(igniteK8sServiceName);
> > cfg.setAddressesFinder(new ThinClientKubernetesAddressFinder(kcfg));
> > cfg.setRetryPolicy(new ClientRetryAllPolicy());
> >
> >
> > after ignite pod restart
> >
> > the client throws an exception: org.apache.ignite.client.ClientConnectionException:
> > Connection timed out
> > at
> org.apache.ignite.internal.client.thin.io.gridnioserver.GridNioClientConnectionMultiplexer.open(GridNioClientConnectionMultiplexer.java:144)
> > at
> org.apache.ignite.internal.client.thin.TcpClientChannel.(TcpClientChannel.java:178)
> > at
> org.apache.ignite.internal.client.thin.ReliableChannel$ClientChannelHolder.getOrCreateChannel(ReliableChannel.java:917)
> > at
> org.apache.ignite.internal.client.thin.ReliableChannel$ClientChannelHolder.getOrCreateChannel(ReliableChannel.java:898)
> > at
> org.apache.ignite.internal.client.thin.ReliableChannel$ClientChannelHolder.access$200(ReliableChannel.java:847)
> > at
> org.apache.ignite.internal.client.thin.ReliableChannel.applyOnDefaultChannel(ReliableChannel.java:759)
> > at
> org.apache.ignite.internal.client.thin.ReliableChannel.applyOnDefaultChannel(ReliableChannel.java:731)
> > at
> org.apache.ignite.internal.client.thin.ReliableChannel.service(ReliableChannel.java:167)
> > at
> org.apache.ignite.internal.client.thin.ReliableChannel.request(ReliableChannel.java:288)
> > at
> org.apache.ignite.internal.client.thin.TcpIgniteClient.getOrCreateCache(TcpIgniteClient.java:185)
> >
> > and I use a retry loop to reconnect and print
> > clientConfiguration.getAddressesFinder().getAddresses(); the address is the new
> > pod address, but the client does not reconnect
> >
> > while (retryTimeTmp < retryTimes) {
> > try {
> > return igniteClient.getOrCreateCache(new
> > ClientCacheConfiguration()
> > .setName(cacheName)
> > .setAtomicityMode(TRANSACTIONAL)
> > .setCacheMode(PARTITIONED)
> > .setBackups(2)
> > .setWriteSynchronizationMode(PRIMARY_SYNC));
> > }catch (Exception e) {
> > LOGGER.error("get cache [{}] not success", cacheName, e);
> > LOGGER.error("get address info [{}], ipfinder [{}]",
> > clientConfiguration.getAddresses(),
> > clientConfiguration.getAddressesFinder().getAddresses());
> >
> > retrySleep();
> > } finally {
> > retryTimeTmp++;
> > }
> >
>


Re: Query default Index in Ignite Sql cache

2022-06-22 Thread Maksim Timonin
Hi,

> 1. Department object has an id and affinity key on the name(it
doesn't make sense in the real world). Does ignite create an index for
affinity keys as well ? I don't see the affinity key index in the list
below.

You need to configure the affinity key with annotations a bit differently,
please check the docs [1]: @AffinityKeyMapped must be placed on a field of the
cache key, not of the value object. The annotation has no effect when it is
applied to a field of the cache value.
So it should look like this:

class DepartmentKey {
    @QuerySqlField
    @AffinityKeyMapped
    private final String deptName;
    ...
}

and then the cache will be IgniteCache<DepartmentKey, Department>.

> 2. When I use the default index explicitly in IndexQuery, it throws an
exception that the index name doesn't exist.

Which index name did you use?

> So by default it uses primary key index or hash index when* IndexQuery* is
executed with _KEY in criteria* ?*

The primary key index is used if the criteria consist of a single _KEY criterion.

[1]
https://ignite.apache.org/docs/2.11.1/data-modeling/affinity-collocation#configuring-affinity-key
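
For the second question, a sketch of how a single `_KEY` criterion goes through
the primary key index (here `eq` is `IndexQueryCriteriaBuilder.eq`, and the
class, cache name and constructor are illustrative, following the example
above):

```java
IgniteCache<DepartmentKey, Department> cache = ignite.getOrCreateCache(
    new CacheConfiguration<DepartmentKey, Department>("departments")
        .setIndexedTypes(DepartmentKey.class, Department.class));

// A query whose criteria consist of a single _KEY condition is resolved
// against the primary key index; no explicit index name is needed.
List<Cache.Entry<DepartmentKey, Department>> rows = cache.query(
    new IndexQuery<DepartmentKey, Department>(Department.class)
        .setCriteria(eq("_KEY", new DepartmentKey("Sales"))) // assumes a matching constructor
    ).getAll();
```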

On Wed, Jun 22, 2022 at 9:49 PM Surinder Mehra  wrote:

> Thanks for the reply. I have another question on the affinity key index(if
> any). When we enable sql on ignite, it by default creates an index on the
> primary key(highlighted in red below). We have created a custom index on
> deptId(highlighted in red).
>
> 1. Department object has an id and affinity key on the name(it
> doesn't make sense in the real world). Does ignite create an index for
> affinity keys as well ? I don't see the affinity key index in the list
> below.
> 2. When I use the default index explicitly in IndexQuery, it throws an
> exception that the index name doesn't exist. So by default it uses primary
> key index or hash index when* IndexQuery* is executed with _KEY in
> criteria* ?*
>
> [image: image.png]
> [image: image.png]
>
> On Wed, Jun 22, 2022 at 9:35 PM Николай Ижиков 
> wrote:
>
>> SELECT * FROM SYS.INDEXES
>>
>> On 22 June 2022, at 18:38, Surinder Mehra  wrote:
>>
>> Hi,
>> We have defined indexes on sql enabled ignite cache and are able to see
>> indexes being used while querying.
>> sqline *!indexes* also shows those indexes in output. But we cant see
>> default index created by ignite on *primary key *and *affinity key*.
>> We would like to use index on key and affinity key to get results sorted
>> accordingly.
>>
>> For example, in official docs link below, Is it possible to use the above
>> mentioned default indexes instead of ORG_SALARY_IDX ?
>> How can we get more details about these indexes which are not shown in
>> sqlline commands.
>>
>>
>>
>> https://ignite.apache.org/docs/latest/key-value-api/using-cache-queries#executing-index-queries
>>
>>
>> QueryCursor<Cache.Entry<Integer, Person>> cursor = cache.query(
>>     new IndexQuery<Integer, Person>(Person.class, "ORG_SALARY_IDX")
>>         .setCriteria(eq("orgId", 1), gt("salary", 1000)));
>>
>>
>>


Re: How to change JMX port in XML Ignite Configuration?

2022-06-06 Thread Maksim Timonin
Hi Rose,

There is a guide [1] on how to configure JMX for Ignite, please check the
setting "com.sun.management.jmxremote.port". It should work.

[1]
https://ignite.apache.org/docs/2.13.0/monitoring-metrics/metrics#enabling-jmx-for-ignite


Maksim

On Fri, Jun 3, 2022 at 4:54 PM Айсина Роза  wrote:

> Hola!
>
>
> I cannot find in official documentation how to change value of JMX port in
> XML Ignite configuration.
>
> Its default value is 49112.
>
>
> Reason for changing it is that I need to parametrize helm chart for Ignite.
>
>
> Thanks in advance!
>
>
> Best regards,
> Rose.
>


Re: What is org.apache.ignite.IgniteCheckedException: Runtime failure on lookup row:

2022-02-25 Thread Maksim Timonin
Hi!

I wrote a test with your DDL and query, and it works for me. I need some more
time to dig into it.

Could you also please show (with an example) how you insert data into the
table, e.g. via SQL or via the key-value API? A sketch of both paths is below.
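
A rough sketch against the car_code DDL quoted below (CarCodeKey/CarCode are
the type names from its WITH clause; the KV field names are assumed to match
the SQL column names, and the cache name is the default SQL_PUBLIC_CAR_CODE
since the DDL sets no CACHE_NAME):

```java
IgniteCache<BinaryObject, BinaryObject> carCodes =
    ignite.cache("SQL_PUBLIC_CAR_CODE").withKeepBinary();

// SQL path:
carCodes.query(new SqlFieldsQuery(
    "insert into car_code (provider_id, car_id, car_code) values (?, ?, ?)")
    .setArgs(1, 42, "ABC")).getAll();

// Key-value path over binary objects:
BinaryObject key = ignite.binary().builder("CarCodeKey")
    .setField("PROVIDER_ID", 1)
    .setField("CAR_ID", 42)
    .build();

BinaryObject val = ignite.binary().builder("CarCode")
    .setField("CAR_CODE", "ABC")
    .build();

carCodes.put(key, val);
```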

Thanks,
Maksim

On Fri, Feb 25, 2022 at 9:32 PM John Smith  wrote:

> Hi Maksim did you look into this?
>
> On Tue., Feb. 22, 2022, 9:51 a.m. John Smith, 
> wrote:
>
>> Hi. This is it.
>>
>> create table if not exists car_code (
>> provider_id int,
>> car_id int,
>> car_code varchar(16),
>> primary key (provider_id, car_id)
>> ) with "template=replicatedTpl, key_type=CarCodeKey, value_type=CarCode";
>>
>> select
>> car_id
>> from car_code
>> where
>> provider_id = ? and
>> car_code = ?
>> order by car_id asc limit 1;
>>
>> On Mon, Feb 21, 2022 at 3:24 AM Maksim Timonin 
>> wrote:
>>
>>> Hi,
>>>
>>> Yes, it looks strange. Could you please share your DDLs and queries that
>>> led to the exception? We can add a compatibility test and it will help us
>>> to investigate the issue.
>>>
>>> Maksim
>>>
>>> On Tue, Feb 15, 2022 at 3:28 PM John Smith 
>>> wrote:
>>>
>>>> It's weird. I dropped the table and recreated it without restarting the
>>>> client applications and it started worked.
>>>>
>>>> This hapenned after upgrading from 2.8.1 to 2.12.0
>>>>
>>>> What's even funnier. I did the upgrade on my dev cluster first and let
>>>> everything run for a couple weeks just to be sure.
>>>>
>>>> On Tue., Feb. 15, 2022, 3:13 a.m. Maksim Timonin, <
>>>> timoninma...@apache.org> wrote:
>>>>
>>>>> Hi,
>>>>>
>>>>> It looks like you have a column with non-String type, but try to query
>>>>> it with a String argument.
>>>>>
>>>>> Could you please share your DDL for table and query parameters for
>>>>> this query?
>>>>>
>>>>> Thanks,
>>>>> Maksim
>>>>>
>>>>> On Tue, Feb 15, 2022 at 8:54 AM John Smith 
>>>>> wrote:
>>>>>
>>>>>> Hi, on the client side I'm getting the below Exception and on the
>>>>>> server side it is pasted below.
>>>>>>
>>>>>>
>>>>>> javax.cache.CacheException: Failed to execute map query on remote
>>>>>> node [nodeId=6e350b53-7224-4b11-b81b-00f44c699b87, errMsg=General error:
>>>>>> \"class org.apache.ignite.IgniteCheckedException: Runtime failure on 
>>>>>> lookup
>>>>>> row: IndexSearchRowImpl
>>>>>> [rowHnd=org.apache.ignite.internal.processors.query.h2.index.QueryIndexRowHandler@d3e431c]\";
>>>>>> SQL statement:\nSELECT\n__Z0.XX_ID __C0_0\nFROM PUBLIC.XX_CODE
>>>>>> __Z0\nWHERE (__Z0.PROVIDER_ID = ?1) AND (__Z0.XX_CODE = ?2)\nORDER 
>>>>>> BY 1
>>>>>> LIMIT 1 [5-197]]\n\tat
>>>>>> org.apache.ignite.internal.processors.query.h2.twostep.GridReduceQueryExecutor.fail(GridReduceQueryExecutor.java:235)\n\tat
>>>>>> org.apache.ignite.internal.processors.query.h2.twostep.GridReduceQueryExecutor.onFail(GridReduceQueryExecutor.java:214)\n\tat
>>>>>> org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.onMessage(IgniteH2Indexing.java:2193)\n\tat
>>>>>> org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.lambda$start$22(IgniteH2Indexing.java:2132)\n\tat
>>>>>> org.apache.ignite.internal.managers.communication.GridIoManager$ArrayListener.onMessage(GridIoManager.java:3480)\n\tat
>>>>>> org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1907)\n\tat
>>>>>> org.apache.ignite.internal.managers.communication.GridIoManager.processRegularMessage0(GridIoManager.java:1528)\n\tat
>>>>>> org.apache.ignite.internal.managers.communication.GridIoManager.access$5300(GridIoManager.java:242)\n\tat
>>>>>> org.apache.ignite.internal.managers.communication.GridIoManager$9.execute(GridIoManager.java:1421)\n\tat
>>>>>> org.apache.ignite.internal.managers.communication.TraceRunnable.run(TraceRunnable.java:55)\n\tat
>>>>>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)\n\tat
>>>>>> java.util.concurrent.ThreadPoolExecutor$Worker.run(Threa

Re: Spatial index broken and breaking cluster when created (2.12.0)

2022-02-21 Thread Maksim Timonin
Hi Rafael,

I investigated your issue. I'm sorry, it's actually a bug: it's impossible to
use client nodes with geospatial indexes in 2.12. It will be fixed and will be
part of the upcoming 2.13 release. Thank you for finding it!

However, it is still possible to work with spatial indexes from server nodes or
thin clients. Your test passes when IgniteClient is used instead of a client
node (see the sketch below). Please check the docs describing thin clients [1].
It's also possible to run your application with server nodes.

If you have any questions, please feel free to ask me.

[1]
https://ignite.apache.org/docs/latest/thin-clients/getting-started-with-thin-clients
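
A minimal sketch of the thin client approach (address and cache name are
placeholders; the cache itself was created with the geometry-indexed type as in
your test):

```java
ClientConfiguration cfg = new ClientConfiguration().setAddresses("127.0.0.1:10800");

try (IgniteClient client = Ignition.startClient(cfg)) {
    ClientCache<Long, Object> points = client.cache("points");

    // Queries over the spatial-indexed type go through the usual thin client query API.
    List<List<?>> rows = points.query(
        new SqlFieldsQuery("select count(*) from MapPoint")).getAll();
}
```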

Thanks,
Maksim



On Tue, Feb 15, 2022 at 1:35 PM Troilo, Rafael 
wrote:

> Hi Maksim,
>
>
> thank you for your fast response.
>
> I attached you a very simple test case and hope that you can reproduce the
> exception(s).
>
> It also happens in an distributed  environment with the server(s) on other
> nodes.
>
>
> Btw. it works as expected with ignite 2.10.0 and previous releases.  And
> very likely get broken with the refactoring of the indexing module.
>
> https://issues.apache.org/jira/browse/IGNITE-13056
>
>
> https://github.com/apache/ignite/commit/35b3528fa020fd5bf1dce03cd2fc63508c9912a3
>
>
> This test will fail:
>
> ​
> @Test
> public void testLoadAndQueryNewClientAccess() {
> try (var server = server()) {
> try (var client = client()) {
> var indexedCache = client.createCache(new CacheConfiguration<Long, MapPoint>()
>     .setName(CACHE_NAME)
>     .setIndexedTypes(Long.class, MapPoint.class));
> load(indexedCache);
> assertEquals(1_000, indexedCache.size());
> queryIndexed(indexedCache);
> }
> try (var client = client()) {
> // just connect a new client will case a exception in the log
> // Caused by: java.lang.NullPointerException
> //   at
> org.apache.ignite.internal.processors.query.h2.opt.GeoSpatialUtils.createIndex(GeoSpatialUtils.java:63)
> // accessing the cache finally results in a fail
> // Caused by: class org.apache.ignite.IgniteCheckedException: Type with
> name 'MapPoint' already indexed in cache 'points'.
> //   at
> org.apache.ignite.internal.processors.query.GridQueryProcessor.registerCache0(GridQueryProcessor.java:2158)
> var cache = client.cache(CACHE_NAME);
> assertEquals(1_000, cache.size());
> queryIndexed(cache);
> }
> }
> }
>
>
>
> Thank you for your time.
>
> Best,
>
> Rafael
>
>
>
> --
> Rafael Troilo
> HeiGIT gGmbH
> Heidelberg Institute for Geoinformation Technology at Heidelberg University
>
> https://heigit.org | rafael.tro...@heigit.org | phone +49-6221-533 484
>
> Postal address: Schloss-Wolfsbrunnenweg 33 | 69118 Heidelberg | Germany 
> Offices: Berliner Str. 45 | 69120 Heidelberg | Germany
>
> Amtsgericht Mannheim | HRB 733765 Managing Directors: Prof. Dr. Alexander 
> Zipf | Dr. Gesa Schönberger
>
> --
> *From:* Maksim Timonin 
> *Sent:* Tuesday, February 15, 2022 9:00 AM
> *To:* user@ignite.apache.org
> *Subject:* Re: Spatial index broken and breaking cluster when created
> (2.12.0)
>
> Hi Rafael,
>
> I check the example from the paper you mentioned, and it works for me, for
> multiple clients too. Could you please provide code for your test case?
> Getting NPE is definitely a bug, and we should fix it. But maybe we have a
> workaround for your case.
>
> Thanks,
> Maksim
>
>
>
> On Mon, Feb 14, 2022 at 8:12 PM Troilo, Rafael 
> wrote:
>
>> Hi everyone,
>>
>> I try to use the ignite-geospatial extension[1] for ignite 2.12.0.
>>
>> But as soon as I create a spatial index with
>>
>> - sql: CREATE spatial INDEX ...
>> - or via annotated classes like the MapPoint example [2]
>>
>> every new client connecting to the cluster get an NullPointerException
>> message. The reason for that is a not set/null `GridCacheContext`.
>>
>> org.apache.ignite.internal.processors.query.h2.opt.GeoSpatialUtils.createIndex(GeoSpatialUtils.java:63)
>>
>> The client, creating the cache with index on geometry (MapPoint example),
>> can access the cache and even do a spatial query as long it stays connected
>> to the cluster.
>>
>> I could provide a Testcase if you like.
>>
>> Dose anyone have a working spatial index example?
>>
>> Thank you for your support.
>> Best,
>> Rafael
>>
>> SEVERE: Can't initialize query structures for not started cache
>> [cacheName=points]
>> class org.apache.ignite.IgniteException: Failed to instantiate:
>> org.apache.ignite.internal.processors.query.h2.opt.GridH2SpatialIndex
>> at
>> org.apache.ignite.intern

Re: What is org.apache.ignite.IgniteCheckedException: Runtime failure on lookup row:

2022-02-21 Thread Maksim Timonin
Hi,

Yes, it looks strange. Could you please share your DDLs and queries that
led to the exception? We can add a compatibility test and it will help us
to investigate the issue.

Maksim

On Tue, Feb 15, 2022 at 3:28 PM John Smith  wrote:

> It's weird. I dropped the table and recreated it without restarting the
> client applications and it started worked.
>
> This hapenned after upgrading from 2.8.1 to 2.12.0
>
> What's even funnier. I did the upgrade on my dev cluster first and let
> everything run for a couple weeks just to be sure.
>
> On Tue., Feb. 15, 2022, 3:13 a.m. Maksim Timonin, 
> wrote:
>
>> Hi,
>>
>> It looks like you have a column with non-String type, but try to query it
>> with a String argument.
>>
>> Could you please share your DDL for table and query parameters for this
>> query?
>>
>> Thanks,
>> Maksim
>>
>> On Tue, Feb 15, 2022 at 8:54 AM John Smith 
>> wrote:
>>
>>> Hi, on the client side I'm getting the below Exception and on the server
>>> side it is pasted below.
>>>
>>>
>>> javax.cache.CacheException: Failed to execute map query on remote node
>>> [nodeId=6e350b53-7224-4b11-b81b-00f44c699b87, errMsg=General error: \"class
>>> org.apache.ignite.IgniteCheckedException: Runtime failure on lookup row:
>>> IndexSearchRowImpl
>>> [rowHnd=org.apache.ignite.internal.processors.query.h2.index.QueryIndexRowHandler@d3e431c]\";
>>> SQL statement:\nSELECT\n__Z0.XX_ID __C0_0\nFROM PUBLIC.XX_CODE
>>> __Z0\nWHERE (__Z0.PROVIDER_ID = ?1) AND (__Z0.XX_CODE = ?2)\nORDER BY 1
>>> LIMIT 1 [5-197]]\n\tat
>>> org.apache.ignite.internal.processors.query.h2.twostep.GridReduceQueryExecutor.fail(GridReduceQueryExecutor.java:235)\n\tat
>>> org.apache.ignite.internal.processors.query.h2.twostep.GridReduceQueryExecutor.onFail(GridReduceQueryExecutor.java:214)\n\tat
>>> org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.onMessage(IgniteH2Indexing.java:2193)\n\tat
>>> org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.lambda$start$22(IgniteH2Indexing.java:2132)\n\tat
>>> org.apache.ignite.internal.managers.communication.GridIoManager$ArrayListener.onMessage(GridIoManager.java:3480)\n\tat
>>> org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1907)\n\tat
>>> org.apache.ignite.internal.managers.communication.GridIoManager.processRegularMessage0(GridIoManager.java:1528)\n\tat
>>> org.apache.ignite.internal.managers.communication.GridIoManager.access$5300(GridIoManager.java:242)\n\tat
>>> org.apache.ignite.internal.managers.communication.GridIoManager$9.execute(GridIoManager.java:1421)\n\tat
>>> org.apache.ignite.internal.managers.communication.TraceRunnable.run(TraceRunnable.java:55)\n\tat
>>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)\n\tat
>>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)\n\tat
>>> java.lang.Thread.run(Thread.java:748)\n
>>>
>>> [05:42:24,158][SEVERE][query-#1422%xx%][GridMapQueryExecutor] Failed
>>> to execute local query.
>>> class org.apache.ignite.internal.processors.query.IgniteSQLException:
>>> General error: "class org.apache.ignite.IgniteCheckedException: Runtime
>>> failure on lookup row: IndexSearchRowImpl
>>> [rowHnd=org.apache.ignite.internal.processors.query.h2.index.QueryIndexRowHandler@75eb2111]";
>>> SQL statement:
>>> SELECT
>>> __Z0.XX_ID __C0_0
>>> FROM PUBLIC.XX_CODE __Z0
>>> WHERE (__Z0.PROVIDER_ID = ?1) AND (__Z0.XX_CODE = ?2)
>>> ORDER BY 1 LIMIT 1 [5-197]
>>> at
>>> org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.executeSqlQuery(IgniteH2Indexing.java:875)
>>> at
>>> org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.executeSqlQueryWithTimer(IgniteH2Indexing.java:962)
>>> at
>>> org.apache.ignite.internal.processors.query.h2.twostep.GridMapQueryExecutor.onQueryRequest0(GridMapQueryExecutor.java:454)
>>> at
>>> org.apache.ignite.internal.processors.query.h2.twostep.GridMapQueryExecutor.onQueryRequest(GridMapQueryExecutor.java:274)
>>> at
>>> org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.onMessage(IgniteH2Indexing.java:2187)
>>> at
>>> org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.lambda$start$22(IgniteH2Indexing.java:2132)
>>> at
>>> org.apache.ignite.internal.managers.communication.GridIoManager$Arr

Re: h2 vulnerability CVE-2021-23463

2022-02-21 Thread Maksim Timonin
Hi Marcus,

> The package com.h2database:h2 from 1.4.198 and before 2.0.202 are
vulnerable to XML

Ignite uses version 1.4.197, so Ignite isn't impacted by this CVE.

Maksim


On Mon, Feb 21, 2022 at 10:16 AM Lo, Marcus  wrote:

> Hi,
>
>
>
> Is Ignite impacted by h2 vulnerability CVE-2021-23463? Thanks.
>
>
>
> https://nvd.nist.gov/vuln/detail/CVE-2021-23463
>
>
>
> Regards,
>
> Marcus
>
>
>


Re: What is org.apache.ignite.IgniteCheckedException: Runtime failure on lookup row:

2022-02-15 Thread Maksim Timonin
Hi,

It looks like you have a column with non-String type, but try to query it
with a String argument.

Could you please share your DDL for table and query parameters for this
query?

Thanks,
Maksim

On Tue, Feb 15, 2022 at 8:54 AM John Smith  wrote:

> Hi, on the client side I'm getting the below Exception and on the server
> side it is pasted below.
>
>
> javax.cache.CacheException: Failed to execute map query on remote node
> [nodeId=6e350b53-7224-4b11-b81b-00f44c699b87, errMsg=General error: \"class
> org.apache.ignite.IgniteCheckedException: Runtime failure on lookup row:
> IndexSearchRowImpl
> [rowHnd=org.apache.ignite.internal.processors.query.h2.index.QueryIndexRowHandler@d3e431c]\";
> SQL statement:\nSELECT\n__Z0.XX_ID __C0_0\nFROM PUBLIC.XX_CODE
> __Z0\nWHERE (__Z0.PROVIDER_ID = ?1) AND (__Z0.XX_CODE = ?2)\nORDER BY 1
> LIMIT 1 [5-197]]\n\tat
> org.apache.ignite.internal.processors.query.h2.twostep.GridReduceQueryExecutor.fail(GridReduceQueryExecutor.java:235)\n\tat
> org.apache.ignite.internal.processors.query.h2.twostep.GridReduceQueryExecutor.onFail(GridReduceQueryExecutor.java:214)\n\tat
> org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.onMessage(IgniteH2Indexing.java:2193)\n\tat
> org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.lambda$start$22(IgniteH2Indexing.java:2132)\n\tat
> org.apache.ignite.internal.managers.communication.GridIoManager$ArrayListener.onMessage(GridIoManager.java:3480)\n\tat
> org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1907)\n\tat
> org.apache.ignite.internal.managers.communication.GridIoManager.processRegularMessage0(GridIoManager.java:1528)\n\tat
> org.apache.ignite.internal.managers.communication.GridIoManager.access$5300(GridIoManager.java:242)\n\tat
> org.apache.ignite.internal.managers.communication.GridIoManager$9.execute(GridIoManager.java:1421)\n\tat
> org.apache.ignite.internal.managers.communication.TraceRunnable.run(TraceRunnable.java:55)\n\tat
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)\n\tat
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)\n\tat
> java.lang.Thread.run(Thread.java:748)\n
>
> [05:42:24,158][SEVERE][query-#1422%xx%][GridMapQueryExecutor] Failed
> to execute local query.
> class org.apache.ignite.internal.processors.query.IgniteSQLException:
> General error: "class org.apache.ignite.IgniteCheckedException: Runtime
> failure on lookup row: IndexSearchRowImpl
> [rowHnd=org.apache.ignite.internal.processors.query.h2.index.QueryIndexRowHandler@75eb2111]";
> SQL statement:
> SELECT
> __Z0.XX_ID __C0_0
> FROM PUBLIC.XX_CODE __Z0
> WHERE (__Z0.PROVIDER_ID = ?1) AND (__Z0.XX_CODE = ?2)
> ORDER BY 1 LIMIT 1 [5-197]
> at
> org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.executeSqlQuery(IgniteH2Indexing.java:875)
> at
> org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.executeSqlQueryWithTimer(IgniteH2Indexing.java:962)
> at
> org.apache.ignite.internal.processors.query.h2.twostep.GridMapQueryExecutor.onQueryRequest0(GridMapQueryExecutor.java:454)
> at
> org.apache.ignite.internal.processors.query.h2.twostep.GridMapQueryExecutor.onQueryRequest(GridMapQueryExecutor.java:274)
> at
> org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.onMessage(IgniteH2Indexing.java:2187)
> at
> org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.lambda$start$22(IgniteH2Indexing.java:2132)
> at
> org.apache.ignite.internal.managers.communication.GridIoManager$ArrayListener.onMessage(GridIoManager.java:3480)
> at
> org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1907)
> at
> org.apache.ignite.internal.managers.communication.GridIoManager.processRegularMessage0(GridIoManager.java:1528)
> at
> org.apache.ignite.internal.managers.communication.GridIoManager.access$5300(GridIoManager.java:242)
> at
> org.apache.ignite.internal.managers.communication.GridIoManager$9.execute(GridIoManager.java:1421)
> at
> org.apache.ignite.internal.managers.communication.TraceRunnable.run(TraceRunnable.java:55)
> at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: org.h2.jdbc.JdbcSQLException: General error: "class
> org.apache.ignite.IgniteCheckedException: Runtime failure on lookup row:
> IndexSearchRowImpl
> [rowHnd=org.apache.ignite.internal.processors.query.h2.index.QueryIndexRowHandler@75eb2111]";
> SQL statement:
> SELECT
> __Z0.XX_ID __C0_0
> FROM PUBLIC.XX_CODE __Z0
> WHERE (__Z0.PROVIDER_ID = ?1) AND (__Z0.XX_CODE = ?2)
> ORDER BY 1 LIMIT 1 [5-197]
> at org.h2.message.DbException.getJdbcSQLException(DbException.java:357)
> at org.h2.message.DbException.get(DbException.java:168)
> at 

Re: Spatial index broken and breaking cluster when created (2.12.0)

2022-02-15 Thread Maksim Timonin
Hi Rafael,

I checked the example from the blog post you mentioned, and it works for me, for
multiple clients too. Could you please provide code for your test case?
Getting NPE is definitely a bug, and we should fix it. But maybe we have a
workaround for your case.

Thanks,
Maksim



On Mon, Feb 14, 2022 at 8:12 PM Troilo, Rafael 
wrote:

> Hi everyone,
>
> I try to use the ignite-geospatial extension[1] for ignite 2.12.0.
>
> But as soon as I create a spatial index with
>
> - sql: CREATE spatial INDEX ...
> - or via annotated classes like the MapPoint example [2]
>
> every new client connecting to the cluster get an NullPointerException
> message. The reason for that is a not set/null `GridCacheContext`.
>
> org.apache.ignite.internal.processors.query.h2.opt.GeoSpatialUtils.createIndex(GeoSpatialUtils.java:63)
>
> The client, creating the cache with index on geometry (MapPoint example),
> can access the cache and even do a spatial query as long it stays connected
> to the cluster.
>
> I could provide a Testcase if you like.
>
> Dose anyone have a working spatial index example?
>
> Thank you for your support.
> Best,
> Rafael
>
> SEVERE: Can't initialize query structures for not started cache
> [cacheName=points]
> class org.apache.ignite.IgniteException: Failed to instantiate:
> org.apache.ignite.internal.processors.query.h2.opt.GridH2SpatialIndex
> at
> org.apache.ignite.internal.processors.query.h2.H2Utils.createSpatialIndex(H2Utils.java:332)
> 
> Caused by: java.lang.NullPointerException
> at
> org.apache.ignite.internal.processors.query.h2.opt.GeoSpatialUtils.createIndex(GeoSpatialUtils.java:63)
>
> Exception in thread "main" javax.cache.CacheException: class
> org.apache.ignite.IgniteCheckedException: Type with name 'MapPoint' already
> indexed in cache 'points'.
> Caused by: class org.apache.ignite.IgniteCheckedException: Type with name
> 'MapPoint' already indexed in cache 'points'.
> at
> org.apache.ignite.internal.processors.query.GridQueryProcessor.registerCache0(GridQueryProcessor.java:2158)
> at
> org.apache.ignite.internal.processors.query.GridQueryProcessor.onCacheStart0(GridQueryProcessor.java:1029)
> at
> org.apache.ignite.internal.processors.query.GridQueryProcessor.onCacheStart(GridQueryProcessor.java:1096)
> at
> org.apache.ignite.internal.processors.cache.GridCacheProcessor.prepareCacheStart(GridCacheProcessor.java:1994)
> at
> org.apache.ignite.internal.processors.cache.GridCacheProcessor.lambda$prepareStartCaches$55a0e703$1(GridCacheProcessor.java:1864)
> at
> org.apache.ignite.internal.processors.cache.GridCacheProcessor.lambda$prepareStartCaches$15(GridCacheProcessor.java:1816)
> at
> org.apache.ignite.internal.processors.cache.GridCacheProcessor.prepareStartCaches(GridCacheProcessor.java:1861)
> at
> org.apache.ignite.internal.processors.cache.GridCacheProcessor.prepareStartCaches(GridCacheProcessor.java:1815)
> at
> org.apache.ignite.internal.processors.cache.CacheAffinitySharedManager.processClientCacheStartRequests(CacheAffinitySharedManager.java:481)
> at
> org.apache.ignite.internal.processors.cache.CacheAffinitySharedManager.processClientCachesRequests(CacheAffinitySharedManager.java:702)
> at
> org.apache.ignite.internal.processors.cache.GridCacheProcessor.processCustomExchangeTask(GridCacheProcessor.java:446)
> at
> org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.processCustomTask(GridCachePartitionExchangeManager.java:3135)
> at
> org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body0(GridCachePartitionExchangeManager.java:3280)
> at
> org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body(GridCachePartitionExchangeManager.java:3197)
> at
> org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:125)
> at java.base/java.lang.Thread.run(Thread.java:830)
>
>
> [1] https://github.com/apache/ignite/tree/master/modules/geospatial
> [2]
> https://www.gridgain.com/resources/blog/geospatial-queries-apache-ignite
>
>
> --
> Rafael Troilo
> HeiGIT gGmbH
> Heidelberg Institute for Geoinformation Technology at Heidelberg University
>
> https://heigit.org | rafael.tro...@heigit.org | phone +49-6221-533 484
>
> Postal address: Schloss-Wolfsbrunnenweg 33 | 69118 Heidelberg | Germany
> Offices: Berliner Str. 45 | 69120 Heidelberg | Germany
>
> Amtsgericht Mannheim | HRB 733765
> Managing Directors: Prof. Dr. Alexander Zipf | Dr. Gesa Schönberger
>


Re: Re: Ignite data not in sync

2022-02-14 Thread Maksim Timonin
Hi Sachin,

There is a command that can help you find which partition copies are inconsistent:
https://ignite.apache.org/docs/latest/tools/control-script#verifying-partition-checksums

Then you can clean only the single node that has the problem, and Ignite will
automatically rebalance the data to it as if it were a new node.

Since Apache Ignite 2.11 there is the Read Repair feature [1], which does
essentially what you want. It is experimental in 2.11 and 2.12 and will be more
solid in 2.13.

[1] https://ignite.apache.org/docs/2.11.1/read-repair
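
In code that would look roughly like this (a sketch; note that the no-arg
`withReadRepair()` of 2.11/2.12 takes a `ReadRepairStrategy` argument in 2.13,
and your cluster would first need an upgrade from 2.7):

```java
IgniteCache<Object, Object> cache = ignite.cache("CRAWLER_FIN_L_R_CACHE"); // cache name is a placeholder

// Reads the key from the primary and all backup copies, repairs an inconsistent
// copy if one is found, and fires a consistency-violation event.
Object row = cache.withReadRepair().get(someKey);
```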

On Mon, Feb 14, 2022 at 7:44 AM  wrote:

> Thanks for responding.
>
>
>
> Yes that is correct. So we are looking for some command from which we can
> repair the data. This cache is configured with one backup.
>
> Possible solution we can think of
>
>
>
> 1)  Export the entire cache and import it back. But it has 200million
> rows. So we can have extended downtime.
>
> 2)  Remove all the three node one at a time, clean them, add them
> back to cluster and let ignite rebuild them as new nodes.
>
> 3)  Change the cache config with readFromBackup = false. So we get
> consistent data.
>
>
>
>
>
> Thanks,
>
> Sachin.
>
>
>
>
>
> *From:* Maksim Timonin 
> *Sent:* 12 February 2022 00:35
> *To:* user@ignite.apache.org
> *Subject:* [External]Re: Ignite data not in sync
>
>
>
>
> Hi!
>
>
>
> Am I correctly understanding your case:
>
> 1. There is a 3-node cluster
>
> 2. You restart one node
>
> 3. After restart you query a cache with the same query, and sometimes it
> returns a value for the field "CLERK_ID" and sometimes not.
>
>
>
> Maksim
>
>
>
> On Fri, Feb 11, 2022 at 11:27 AM  wrote:
>
> We are running Apache Ignite version 2.7, It is three node cluster.
>
> After restart of one of the Ignite node, we are getting inconsistent
> select result on one of our cache.
>
>
>
> Is there any way we can remove this inconsistency ? Below is the couple
> sql which clerk_id is intermittently visible.
>
>
>
>
>
>
>
>
>
>
>
> 0: jdbc:ignite:thin://10.142.49.43:10800>
> select clerk_id,tstamp_trans from crawler_fin_l_r where
> UNIQUENESS_KEY='893689191';
>
> +++
>
> |CLERK_ID|  TSTAMP_TRANS  |
>
> +++
>
> | 1212652456 | 2022011018152249   |
>
> +++
>
> 1 row selected (0.017 seconds)
>
> 0: jdbc:ignite:thin://10.142.49.43:10800>
> select clerk_id,tstamp_trans from crawler_fin_l_r where
> UNIQUENESS_KEY='893689191';
>
> +++
>
> |CLERK_ID|  TSTAMP_TRANS  |
>
> +++
>
> | 1212652456 | 2022011018152249   |
>
> +++
>
> 1 row selected (0.002 seconds)
>
> 0: jdbc:ignite:thin://10.142.49.43:10800>
> select clerk_id,tstamp_trans from crawler_fin_l_r where
> UNIQUENESS_KEY='893689191';
>
> +++
>
> |CLERK_ID|  TSTAMP_TRANS  |
>
> +++
>
> || 2022011018152249   |
>
> +++
>
> 1 row selected (0.003 seconds)
>
> 0: jdbc:ignite:thin://10.142.49.43:10800>
> select clerk_id,tstamp_trans from crawler_fin_l_r where
> UNIQUENESS_KEY='893689191';
>
> +

Re: Ignite JOIN fails to return results using WHERE clause

2022-02-14 Thread Maksim Timonin
Hi Courtney,

> Is it the case that as long as the affinity key is in the join predicate
that it would be a colocated JOIN

This is true when the predicate contains an equality condition on the
affinity keys. If a join predicate includes the affinity-key equality
condition, it can also contain any other conditions.

In your case, you have two tables with affinity keys T0.releaseId and
T1.t0Id. A valid join predicate is then: "T0.releaseId = T1.t0Id and (...
anything you wish)"

> In other words, if both tables share the same affinity key is it still a
collocated join if there are other filters in the join predicate?

Yes.

You can check examples of valid and invalid joins here:
https://github.com/timoninmaxim/ignite/blob/master/modules/indexing/src/test/java/org/apache/ignite/sqltests/CheckWarnJoinPartitionedTables.java

Since Apache Ignite 2.12, a warning message is written for SQL queries with
non-colocated joins.
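
For example, a colocated variant of the query from this thread could look
like this (a sketch that assumes T1 also stores releaseId as its affinity
key, as discussed below):

import org.apache.ignite.cache.query.SqlFieldsQuery;

// Sketch: the ON clause contains the affinity-key equality
// (tbl.releaseId = col.releaseId), so the join stays colocated even though
// extra conditions are present in the predicate.
SqlFieldsQuery qry = new SqlFieldsQuery(
    "SELECT tbl.releaseId, tbl.name FROM T0 tbl " +
    "INNER JOIN T1 col ON tbl.releaseId = col.releaseId AND col.tableId = tbl.id");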


On Sat, Feb 12, 2022 at 10:04 AM Courtney Robinson <
courtney.robin...@crlog.info> wrote:

> Hi Maksim,
>
> Interesting, thanks for your reply.
> Okay, I misunderstood (I also thought being on a single node that it
> didn't matter).
>
> Is it the case that as long as the affinity key is in the join predicate
> that it would be a colocated JOIN (I'm concerned about the impact of
> setDistributedJoins(true))?
> Or is it the case that if you're joining on partitioned tables, you must
> do so with ONLY the affinity key in the join predicate?
>
> SELECT tbl.releaseId, tbl.name FROM T0 tbl
>
>  INNER JOIN T1 col ON tbl.releaseId = 
>
>
> In the previous tables, T1 does not have the releaseId as a column so does
> that mean it is impossible to do a co-located JOIN with this setup?
>
> If we modify T1 so that it also has releaseId and we make releaseId the
> affinity key of T1 will both of these work?
>
> SELECT tbl.releaseId, tbl.name FROM T0 tbl
>
>  INNER JOIN T1 col ON tbl.releaseId = col.releaseId
>
>
> AND
>
> SELECT tbl.releaseId, tbl.name FROM T0 tbl
>
>  INNER JOIN T1 col ON tbl.releaseId = col.releaseId AND col.tableId =
>> tbl.id AND col.x = y
>
>
> In other words, if both tables share the same affinity key is it still a
> collocated join if there are other filters in the join predicate?
>
> If the answer to this yes, does it matter if the filters in the join
> predicate are all = i.e. does it have to be an equi-join? or could the
> predicate be
>
>> ON tbl.releaseId = col.releaseId AND* col.tableId > tbl.id
>> <http://tbl.id/>* AND *col.x >= y*
>>
>
> Thanks
>
> On Fri, Feb 11, 2022 at 6:42 PM Maksim Timonin 
> wrote:
>
>> Hi Courtney,
>>
>> > I don't expect collocation issues to be in play here
>>
>> Do you check this doc:
>> https://ignite.apache.org/docs/latest/SQL/distributed-joins ?
>>
>> It says: "A distributed join is a SQL statement with a join clause that
>> combines two or more partitioned tables. If the tables are joined on the
>> partitioning column (affinity key), the join is called a colocated join.
>> Otherwise, it is called a non-colocated join"
>>
>> You definitely have a collocation issue due to non-collocated join: T0
>> partitioned by "releaseId", T1 by "t0Id", and you make a join by columns
>> that aren't affinity columns (id = tableId).
>>
>> You should specify the flag "SqlFieldsQuery.setDistributedJoins(true)" to
>> make your join return correct results.
>>
>> Maksim
>>
>>
>> On Fri, Feb 11, 2022 at 8:09 PM Courtney Robinson <
>> courtney.robin...@crlog.info> wrote:
>>
>>>
>>> I have a query like this:
>>>
>>> SELECT
>>>> tbl.id AS tbl_id, tbl.releaseId AS tbl_releaseId, col.type AS col_type
>>>> FROM T0 tbl
>>>> INNER JOIN T1 col ON tbl.id = col.tableId
>>>> *WHERE tbl.releaseId = ? AND tbl.name <http://tbl.name> = ?*
>>>> LIMIT 100
>>>>
>>>
>>> This returns no results so after investigating, I ended up changing it
>>> to the below
>>>
>>> SELECT
>>>> tbl.id AS tbl_id, tbl.releaseId AS tbl_releaseId, col.type AS col_type
>>>> FROM *(SELECT * FROM T0 t WHERE t.releaseId = ? AND t.name
>>>> <http://t.name> = ?) *tbl
>>>> INNER JOIN T1 col ON tbl.id = col.tableId
>>>> LIMIT 100
>>>>
>>>
>>>  This returns the results expected.
>>> Can anyone offer any insight into what is going wrong here?
>>>
>>> The tables here look like this (I removed some columns from the tables
>>> and the query to he

Re: Ignite data not in sync

2022-02-11 Thread Maksim Timonin
Hi!

Am I correctly understanding your case:
1. There is a 3-node cluster
2. You restart one node
3. After restart you query a cache with the same query, and sometimes it
returns a value for the field "CLERK_ID" and sometimes not.

Maksim

On Fri, Feb 11, 2022 at 11:27 AM  wrote:

> We are running Apache Ignite version 2.7, It is three node cluster.
>
> After restart of one of the Ignite node, we are getting inconsistent
> select result on one of our cache.
>
>
>
> Is there any way we can remove this inconsistency ? Below is the couple
> sql which clerk_id is intermittently visible.
>
>
>
>
>
>
>
>
>
>
>
> 0: jdbc:ignite:thin://10.142.49.43:10800> select clerk_id,tstamp_trans
> from crawler_fin_l_r where UNIQUENESS_KEY='893689191';
>
> +++
>
> |CLERK_ID|  TSTAMP_TRANS  |
>
> +++
>
> | 1212652456 | 2022011018152249   |
>
> +++
>
> 1 row selected (0.017 seconds)
>
> 0: jdbc:ignite:thin://10.142.49.43:10800> select clerk_id,tstamp_trans
> from crawler_fin_l_r where UNIQUENESS_KEY='893689191';
>
> +++
>
> |CLERK_ID|  TSTAMP_TRANS  |
>
> +++
>
> | 1212652456 | 2022011018152249   |
>
> +++
>
> 1 row selected (0.002 seconds)
>
> 0: jdbc:ignite:thin://10.142.49.43:10800> select clerk_id,tstamp_trans
> from crawler_fin_l_r where UNIQUENESS_KEY='893689191';
>
> +++
>
> |CLERK_ID|  TSTAMP_TRANS  |
>
> +++
>
> || 2022011018152249   |
>
> +++
>
> 1 row selected (0.003 seconds)
>
> 0: jdbc:ignite:thin://10.142.49.43:10800> select clerk_id,tstamp_trans
> from crawler_fin_l_r where UNIQUENESS_KEY='893689191';
>
> +++
>
> |CLERK_ID|  TSTAMP_TRANS  |
>
> +++
>
> || 2022011018152249   |
>
> +++
>
> 1 row selected (0.002 seconds)
>
> 0: jdbc:ignite:thin://10.142.49.43:10800>
>
>
> "*Confidentiality Warning*: This message and any attachments are intended
> only for the use of the intended recipient(s), are confidential and may be
> privileged. If you are not the intended recipient, you are hereby notified
> that any review, re-transmission, conversion to hard copy, copying,
> circulation or other use of this message and any attachments is strictly
> prohibited. If you are not the intended recipient, please notify the sender
> immediately by return email and delete this message and any attachments
> from your system.
>
> *Virus Warning:* Although the company has taken reasonable precautions to
> ensure no viruses are present in this email. The company cannot accept
> responsibility for any loss or damage arising from the use of this email or
> attachment."
>


Re: Ignite JOIN fails to return results using WHERE clause

2022-02-11 Thread Maksim Timonin
Hi Courtney,

> I don't expect collocation issues to be in play here

Did you check this doc:
https://ignite.apache.org/docs/latest/SQL/distributed-joins ?

It says: "A distributed join is a SQL statement with a join clause that
combines two or more partitioned tables. If the tables are joined on the
partitioning column (affinity key), the join is called a colocated join.
Otherwise, it is called a non-colocated join"

You definitely have a colocation issue due to a non-colocated join: T0 is
partitioned by "releaseId", T1 by "t0Id", and you join on columns that
aren't affinity columns (id = tableId).

You should specify the flag "SqlFieldsQuery.setDistributedJoins(true)" to
make your join return correct results.
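
As a sketch (query text taken from your first mail; the argument values are
placeholders):

import org.apache.ignite.cache.query.SqlFieldsQuery;

// Sketch: enable distributed joins so the non-colocated join returns
// complete results instead of silently missing rows.
SqlFieldsQuery qry = new SqlFieldsQuery(
    "SELECT tbl.id, tbl.releaseId, col.type FROM T0 tbl " +
    "INNER JOIN T1 col ON tbl.id = col.tableId " +
    "WHERE tbl.releaseId = ? AND tbl.name = ? LIMIT 100")
    .setArgs(releaseId, name)
    .setDistributedJoins(true);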

Maksim


On Fri, Feb 11, 2022 at 8:09 PM Courtney Robinson <
courtney.robin...@crlog.info> wrote:

>
> I have a query like this:
>
> SELECT
>> tbl.id AS tbl_id, tbl.releaseId AS tbl_releaseId, col.type AS col_type
>> FROM T0 tbl
>> INNER JOIN T1 col ON tbl.id = col.tableId
>> *WHERE tbl.releaseId = ? AND tbl.name  = ?*
>> LIMIT 100
>>
>
> This returns no results so after investigating, I ended up changing it to
> the below
>
> SELECT
>> tbl.id AS tbl_id, tbl.releaseId AS tbl_releaseId, col.type AS col_type
>> FROM *(SELECT * FROM T0 t WHERE t.releaseId = ? AND t.name
>>  = ?) *tbl
>> INNER JOIN T1 col ON tbl.id = col.tableId
>> LIMIT 100
>>
>
>  This returns the results expected.
> Can anyone offer any insight into what is going wrong here?
>
> The tables here look like this (I removed some columns from the tables and
> the query to help make it easier on the eyes to parse):
>
> CREATE TABLE IF NOT EXISTS T0
>> (
>>   idLONG,
>>   releaseId VARCHAR,
>>   name  VARCHAR,
>>   PRIMARY KEY (releaseId, id)
>> ) WITH "template=hypi_tpl,affinity_key=releaseId";
>
> CREATE INDEX IF NOT EXISTS VirtualTable_idx0 ON VirtualTable (releaseId,
>> name);
>>
>> CREATE TABLE IF NOT EXISTS T1
>> (
>>   id  LONG,
>>   t0Id LONG,
>>   nameVARCHAR,
>>   typeVARCHAR,
>>   PRIMARY KEY (t0Id, id)
>> ) WITH "template=hypi_tpl,affinity_key=t0Id";
>>
>
> Note here it is a single node locally (so I don't expect collocation
> issues to be in play here) - in development so not in a production cluster
> yet.
> Running Ignite 2.8.0
>
> This is not the first time we've had something like this but it's the
> first time I've been able to reproduce it myself and consistently.
>
> Best,
> Courtney
>


Re: using multiple columns Index

2021-12-16 Thread Maksim Timonin
Hi, Chrystophe!

> But do you have a tweak in mind to get better response time on this kind
of request ?

Do you mean queries that return an empty result set? Unfortunately, I'm not
aware of a way to improve the performance of such queries.

Also, I don't know the task you're actually solving. If latency is critical,
it may be possible to avoid such queries by using different queries,
different logic, or a different data schema.


On Thu, Dec 16, 2021 at 5:26 PM Chrystophe Vergnaud <
chrystophe.vergn...@gmail.com> wrote:

> Hi Maxim,
>
> Thanks for the details.
>
> But do you have a tweak in mind to get better response time on this kind
> of request ?
>
> BR,
>
> Chrystophe Vergnaud
> Architect @ Cyblex Technologies
>
>
>
> On Thu, Dec 16, 2021 at 3:09 PM Maksim Timonin  wrote:
>
>> Hi, Chrystophe!
>>
>> Multifield index should perfectly work for cases with strict equality
>> like: (a == ? && b == ?) OR (a == ? && b > ?) . But for queries with range
>> queries for first field "a" ("a" > ? && b ...) you should not expect a
>> boost of performance. You're right, it's due to B+Tree implementation - we
>> store data in pairs (a, b). And then in the index storage the sequence will
>> be: (1, 1), (1, 2), (1, 3), (2, 1). We sort by A, and only in case of
>> equality field A, we check field B.
>>
>> So for queries like (a > 0 && b < 10) there is not much help from the
>> condition on B for reducing data slice. Our implementation doesn't skip
>> some sub-trees for conditions, but it checks tree range sequentially.
>>
>> > Worst, if the value of b is not present in the slice, it is responding
>> as if the b was not in the WHERE clause at all (it seems to run a full scan
>> on the sub-result)
>>
>> I think it can depend on the amount of data you return. If there is no
>> data suitable for your condition, you will hang until the query finishes.
>> But if you have some, it will return the cursor earlier, after preparing
>> the first page for response, see SqlFieldsQuery.setPageSize().
>>
>> Also, performance may depend on your index selectivity.
>>
>>
>> On Thu, Dec 16, 2021 at 4:23 PM Chrystophe Vergnaud <
>> chrystophe.vergn...@gmail.com> wrote:
>>
>>> Hello Stephen,
>>>
>>> I was created with SQL :
>>> CREATE INDEX IF NOT EXISTS "t_idx_1" ON MYSCHEMA."t" ("a", "b", "c");
>>>
>>> BR,
>>>
>>> Chrystophe Vergnaud
>>> Architect @ Cyblex Technologies
>>>
>>>
>>> On Thu, Dec 16, 2021 at 1:51 PM Stephen Darlington <
>>> stephen.darling...@gridgain.com> wrote:
>>>
>>>> Can you show how you’ve defined your index(es)?
>>>>
>>>> > On 16 Dec 2021, at 12:27, Chrystophe Vergnaud <
>>>> chrystophe.vergn...@gmail.com> wrote:
>>>> >
>>>> > Hello,
>>>> >
>>>> > I'm running an ignite 2.10 and I don't understand the behavior of the
>>>> multi-columns index.
>>>> >
>>>> > For instance, I have a table t(id, a,b,c, d, e, f, g)
>>>> > - id is a uuid and is the key
>>>> > - a is a TIMESTAMP
>>>> > - b is a  SMALLINT
>>>> > - c is a TINYINT
>>>> > - e, f, g are VARCHAR
>>>> >
>>>> > this table have around 200M lines
>>>> >
>>>> > I have to select data based on a, b, c in this order, so basically, I
>>>> have setup an index on (a,b,c)
>>>> >
>>>> > If I apply a select with a WHERE clause on "a>=x AND a < y", it works
>>>> perfectly, the response time is ok (using USE INDEX)
>>>> >
>>>> > If I add the b in the WHERE clause I expect to optimize the response
>>>> time but it is not the case. Worst, if the value of b is not present in the
>>>> slice, it is responding as if the b was not in the WHERE clause at all (it
>>>> seems to run a full scan on the sub-result)
>>>> >
>>>> > Do I miss something ? is it related to the implementation of the
>>>> B+tree ?
>>>> >
>>>> > Thanks in advance for your help.
>>>> >
>>>> > Best regards,
>>>> >
>>>> > Chrystophe Vergnaud
>>>> > Architect @ Cyblex Technologies
>>>>
>>>>
>>>>


Re: using multiple columns Index

2021-12-16 Thread Maksim Timonin
Hi, Chrystophe!

A multi-field index works perfectly for cases with strict equality on the
leading field, like (a == ? && b == ?) OR (a == ? && b > ?). But for queries
with a range condition on the first field "a" ("a" > ? && b ...) you should
not expect a performance boost. You're right, it's due to the B+Tree
implementation: we store data as pairs (a, b), so in the index storage the
sequence will be (1, 1), (1, 2), (1, 3), (2, 1). Entries are sorted by A,
and only when the values of A are equal do we compare field B.

So for queries like (a > 0 && b < 10) the condition on B does not help much
to reduce the data slice. Our implementation doesn't skip sub-trees based on
such conditions; it scans the tree range sequentially.

> Worst, if the value of b is not present in the slice, it is responding as
if the b was not in the WHERE clause at all (it seems to run a full scan on
the sub-result)

I think it depends on the amount of data you return. If there is no data
matching your condition, you will wait until the query finishes. But if
there is some, the cursor is returned earlier, after the first page of the
response is prepared; see SqlFieldsQuery.setPageSize().

Also, performance may depend on your index selectivity.
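
To make it concrete with your (a, b, c) index (a rough sketch, using the
column names from your mail):

import org.apache.ignite.cache.query.SqlFieldsQuery;

// Narrows the index scan by both columns: equality on the leading column "a".
SqlFieldsQuery q1 = new SqlFieldsQuery("SELECT * FROM t WHERE a = ? AND b = ?");

// Only the range on "a" narrows the scan; the condition on "b" is checked
// row by row inside that range, so don't expect it to speed things up.
SqlFieldsQuery q2 = new SqlFieldsQuery("SELECT * FROM t WHERE a >= ? AND a < ? AND b = ?");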


On Thu, Dec 16, 2021 at 4:23 PM Chrystophe Vergnaud <
chrystophe.vergn...@gmail.com> wrote:

> Hello Stephen,
>
> I was created with SQL :
> CREATE INDEX IF NOT EXISTS "t_idx_1" ON MYSCHEMA."t" ("a", "b", "c");
>
> BR,
>
> Chrystophe Vergnaud
> Architect @ Cyblex Technologies
>
>
> On Thu, Dec 16, 2021 at 1:51 PM Stephen Darlington <
> stephen.darling...@gridgain.com> wrote:
>
>> Can you show how you’ve defined your index(es)?
>>
>> > On 16 Dec 2021, at 12:27, Chrystophe Vergnaud <
>> chrystophe.vergn...@gmail.com> wrote:
>> >
>> > Hello,
>> >
>> > I'm running an ignite 2.10 and I don't understand the behavior of the
>> multi-columns index.
>> >
>> > For instance, I have a table t(id, a,b,c, d, e, f, g)
>> > - id is a uuid and is the key
>> > - a is a TIMESTAMP
>> > - b is a  SMALLINT
>> > - c is a TINYINT
>> > - e, f, g are VARCHAR
>> >
>> > this table have around 200M lines
>> >
>> > I have to select data based on a, b, c in this order, so basically, I
>> have setup an index on (a,b,c)
>> >
>> > If I apply a select with a WHERE clause on "a>=x AND a < y", it works
>> perfectly, the response time is ok (using USE INDEX)
>> >
>> > If I add the b in the WHERE clause I expect to optimize the response
>> time but it is not the case. Worst, if the value of b is not present in the
>> slice, it is responding as if the b was not in the WHERE clause at all (it
>> seems to run a full scan on the sub-result)
>> >
>> > Do I miss something ? is it related to the implementation of the B+tree
>> ?
>> >
>> > Thanks in advance for your help.
>> >
>> > Best regards,
>> >
>> > Chrystophe Vergnaud
>> > Architect @ Cyblex Technologies
>>
>>
>>


Re: [2.11.0]Regression: Ignite node crash(CorruptedTreeException: B+Tree is corrupted)

2021-11-19 Thread Maksim Timonin
Hi!

FYI, I fixed this bug; the fix will be available in version 2.12, which will
be released soon.

On Thu, Nov 18, 2021 at 1:27 PM 18624049226 <18624049...@163.com> wrote:

> Hello,
>
> I know how to avoid this issue, I raise this problem because it does not
> exist in version 2.10.
>
> I think the issues related to SQL Engine and numerical calculation are
> relatively more serious, because they may have a wide impact.
> On 2021/11/18 17:53, Maksim Timonin wrote:
>
> Yes, I reproduced it.
>
> This is a bug related to calculation of a float value within a SQL query.
> Right now, I see some workarounds:
> 1. Calculate the float value in your application code, and put it as an
> argument in SqlFieldsQuery.setArgs(), if possible.
> 2. Use 100 in a separate column. For the query you wrote, SQL doesn't use
> the index in the query plan.
>
> Thanks!
>
> On Thu, Nov 18, 2021 at 12:21 PM 18624049226 <18624049...@163.com> wrote:
>
>> No configuration is required, only start with ignite.sh.
>>
>> full log is attached.
>> On 2021/11/18 17:10, Maksim Timonin wrote:
>>
>> Hi, I failed to reproduce it. Could you please provide more details? Some
>> about nodes configuration, cache configurations, etc.
>>
>> Also could you please provide a full stack trace of the exception?
>>
>> Thanks
>>
>> On Thu, Nov 18, 2021 at 11:50 AM 18624049226 <18624049...@163.com> wrote:
>>
>>> Hi,
>>>
>>> The following issue occurred in version 2.11 and not in version 2.10.
>>>
>>> create table test1(id varchar primary key, flag long);
>>>
>>> create index index_test1_flag on test1 (flag);
>>>
>>> insert into test1(id, flag) values('a1', 1);
>>>
>>> select * from test1 where flag<100*0.5;
>>>
>>> then ignite crashed.
>>>
>>> https://issues.apache.org/jira/browse/IGNITE-15943
>>>
>>>
>>>


Re: [2.11.0]Regression: Ignite node crash(CorruptedTreeException: B+Tree is corrupted)

2021-11-18 Thread Maksim Timonin
Yes, I reproduced it.

This is a bug related to the calculation of a float value within a SQL query.
Right now, I see two workarounds:
1. Calculate the float value in your application code and pass it as an
argument via SqlFieldsQuery.setArgs(), if possible (see the sketch below).
2. Use 100 in a separate column. For the query you wrote, SQL doesn't use
the index in the query plan.
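
A sketch of the first workaround, using the query from your reproducer:

import org.apache.ignite.cache.query.SqlFieldsQuery;

// Sketch: compute the float threshold in the application and pass it as an
// argument instead of writing the 100*0.5 expression inside the SQL text.
SqlFieldsQuery qry = new SqlFieldsQuery("select * from test1 where flag < ?")
    .setArgs(100 * 0.5);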

Thanks!

On Thu, Nov 18, 2021 at 12:21 PM 18624049226 <18624049...@163.com> wrote:

> No configuration is required, only start with ignite.sh.
>
> full log is attached.
> On 2021/11/18 17:10, Maksim Timonin wrote:
>
> Hi, I failed to reproduce it. Could you please provide more details? Some
> about nodes configuration, cache configurations, etc.
>
> Also could you please provide a full stack trace of the exception?
>
> Thanks
>
> On Thu, Nov 18, 2021 at 11:50 AM 18624049226 <18624049...@163.com> wrote:
>
>> Hi,
>>
>> The following issue occurred in version 2.11 and not in version 2.10.
>>
>> create table test1(id varchar primary key, flag long);
>>
>> create index index_test1_flag on test1 (flag);
>>
>> insert into test1(id, flag) values('a1', 1);
>>
>> select * from test1 where flag<100*0.5;
>>
>> then ignite crashed.
>>
>> https://issues.apache.org/jira/browse/IGNITE-15943
>>
>>
>>


Re: [2.11.0]Regression: Ignite node crash(CorruptedTreeException: B+Tree is corrupted)

2021-11-18 Thread Maksim Timonin
Hi, I failed to reproduce it. Could you please provide more details, e.g.
the node configuration, cache configurations, etc.?

Also could you please provide a full stack trace of the exception?

Thanks

On Thu, Nov 18, 2021 at 11:50 AM 18624049226 <18624049...@163.com> wrote:

> Hi,
>
> The following issue occurred in version 2.11 and not in version 2.10.
>
> create table test1(id varchar primary key, flag long);
>
> create index index_test1_flag on test1 (flag);
>
> insert into test1(id, flag) values('a1', 1);
>
> select * from test1 where flag<100*0.5;
>
> then ignite crashed.
>
> https://issues.apache.org/jira/browse/IGNITE-15943
>
>
>


Re: ML task in Ignite

2021-11-18 Thread Maksim Timonin
Hi, there are some:
https://github.com/apache/ignite/tree/master/examples/src/main/java/org/apache/ignite/examples/ml

On Thu, Nov 18, 2021 at 11:35 AM Piper H  wrote:

> Hello
>
> Do you guys have a sample case to show how to run ML tasks in Ignite?
> We have the ignite cluster in use. I want to know if this can run machine
> learning as well.
>
> Thanks & Regards,
> Piper
>


Re: Run ignite kubernetes pod on java 11

2021-11-18 Thread Maksim Timonin
Hi, you should move ignite-kubernetes (and the other optional libs you're
going to use) from libs/optional/ to $IGNITE_HOME/libs/
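
For example, adding a line like
`RUN cp -r $IGNITE_HOME/libs/optional/ignite-kubernetes $IGNITE_HOME/libs/`
to your Dockerfile should be enough (paths as in your image). Alternatively,
since you copy run.sh from the official image, setting the OPTION_LIBS
environment variable (the commented-out `ENV OPTION_LIBS ignite-kubernetes`
line in your Dockerfile) should have the same effect: run.sh copies the
listed modules from libs/optional/ into libs/ at startup.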

On Wed, Nov 17, 2021 at 6:50 PM Surinder Mehra  wrote:

> Thanks for the suggestion. I tried to fix dependencies and it works with
> default-ignite-config.xml, which means when CONFIG_URI ARG is not provided.
> When I pass my custom node configuration file which contains KubernetesIP
> finder, the container fails to find the K8 Ip finder class.
> I have verified that the container /libs/optional/ directory has
> ignite-kuberenetes and zookeper jars in it.
> So is it genuine error or if I deploy this in kubernetes as stateful set
> after using this image, it would work
>
> Here is my updated Dockerfile
>
> FROM adoptopenjdk/openjdk11
>
> # Settings
> ARG IGNITE_CFG_XML="node-configuration.xml"
> ARG IGNITE_VERSION="2.11.0"
> ENV IGNITE_HOME /opt/ignite/apache-ignite
> ENV CONFIG_URI config/$IGNITE_CFG_XML
> # Disabling quiet mode.
> ENV IGNITE_QUIET=false
> WORKDIR /opt/ignite
>
> # Add missing software
> RUN apt-get update &&\
> apt-get install bash && \
> apt-get install -y wget && \
> apt-get install unzip && \
> wget 
> https://dlcdn.apache.org//ignite/${IGNITE_VERSION}/apache-ignite-${IGNITE_VERSION}-bin.zip
>  && \
> unzip -o apache-ignite-${IGNITE_VERSION}-bin.zip && \
> mv apache-ignite-${IGNITE_VERSION}-bin apache-ignite && \
> rm apache-ignite-${IGNITE_VERSION}-bin.zip
> # Copy main binary archive
> #COPY apache-ignite* apache-ignite
>
> # Copy sh files and set permission
> COPY run.sh $IGNITE_HOME/
> COPY ./$IGNITE_CFG_XML $IGNITE_HOME/config
> # Grant permission to copy optional libs
> RUN chmod 777 ${IGNITE_HOME}/libs
>
> # Grant permission to create work directory
> RUN chmod 777 ${IGNITE_HOME}
>
> # Grant permission to execute entry point
> RUN chmod 555 $IGNITE_HOME/run.sh
>
> # Entry point
> CMD $IGNITE_HOME/run.sh
>
> # Container port exposure
> EXPOSE 11211 47100 47500 49112 10800 8080
>
>
> On Wed, Nov 17, 2021 at 5:49 PM Surinder Mehra  wrote:
>
>> Yes, thanks for replying. I tried doing that but it assumes the ignite
>> binary is present in the local directory somewhere. " COPY
>> apache-ignite* apache-ignite  "
>> Could you please explain
>> 1. where is it reading ignite binaries from and
>> 2. ignite binaries zip doesn't have run.sh. If I copy run.sh as well
>> along with Dockerfile from github, does it have any other dependency.
>>
>> On Wed, Nov 17, 2021 at 4:51 PM Stephen Darlington <
>> stephen.darling...@gridgain.com> wrote:
>>
>>> I’d just take the original Dockerfile (Dockerfile
>>> )
>>> and replace the reference to Java 8 with Java 11
>>>
>>> On 17 Nov 2021, at 10:50, Surinder Mehra  wrote:
>>>
>>> Hi,
>>> I tried to build one with two approaches. I was thinking the 1st one is
>>> simple and should work but it didn't so I tried the 2nd approach which
>>> seems to be missing something. Can you point out the missing piece please.
>>>
>>> 1. Extend base image and update java home as below
>>> FROM apacheignite/ignite
>>>
>>> ENV JAVA_HOME /usr/lib/jvm/java-11-openjdk-amd64
>>> # Install OpenJDK-11
>>> RUN apt-get update && \
>>>apt-get install -y openjdk-11-jdk && \
>>>export JAVA_HOME && \
>>>apt-get clean;
>>> RUN export JAVA_HOME="$(dirname $(dirname $(readlink -f $(which java"
>>>
>>> This throws an error "apt-get not found". I tried with yum as well, but
>>> it throws the same error. Not sure why it doesn't have package manager
>>>
>>> 2. On the 2nd approach I tried to use jdk11 as base image and install
>>> ignite on it and run /bin/ignite.sh. It throws an error saying it cant find
>>> executable on path.
>>>
>>> FROM adoptopenjdk/openjdk11
>>>
>>> # Set Apache Ignite configuration file name.
>>> ARG IGNITE_CFG_XML="node-configuration.xml"
>>>
>>> # Set Apache Ignite version.
>>> ARG IGNITE_VERSION="2.11.0"
>>>
>>> # Set IGNITE_HOME variable.
>>> ENV IGNITE_HOME /opt/ignite/apache-ignite-${IGNITE_VERSION}-bin
>>>
>>> # Set a path to the Apache Ignite configuration file. Use the run.sh script 
>>> below:
>>> ENV CONFIG_URI ${IGNITE_HOME}/config/$IGNITE_CFG_XML
>>>
>>> # Make sure the Kubernetes lib is copied to the 'libs' folder.
>>> #ENV OPTION_LIBS ignite-kubernetes
>>>
>>> # Disabling quiet mode.
>>> ENV IGNITE_QUIET=false
>>> WORKDIR /opt/ignite
>>> # Install or update needed tools.
>>> #RUN apt-get update && apt-get install -y --no-install-recommends unzip
>>> RUN apt-get update && \
>>>  apt-get install -y wget && \
>>>  apt-get install unzip && \
>>>  wget 
>>> https://dlcdn.apache.org//ignite/${IGNITE_VERSION}/apache-ignite-${IGNITE_VERSION}-bin.zip
>>> # Creating and setting a working directory for following commands.
>>>
>>> # Copying local Apache Ignite build to the docker image.
>>> #COPY ./apache-ignite-${IGNITE_VERSION}-bin.zip 
>>> apache-ignite-${IGNITE_VERSION}-bin.zip
>>>
>>> # Unpacking the 

Re: Re client node connects to server nodes behind NAT

2021-11-15 Thread Maksim Timonin
Hi!

Unfortunately, support for peer class loading and data streaming is limited
for thin clients. AFAIK, the .NET thin client supports the data streamer.

I found that Ignite has a special mode for running client nodes behind NAT,
`forceClientToServerConnections` [1]. This feature has been marked as
experimental since 2.9, and I don't see any updates there. Actually, I'm not
very familiar with it; I found the feature discussion on the dev list [2],
and maybe somebody from the community can say more about it. But it looks
like it could be a solution in your case.

[1]
https://ignite.apache.org/docs/latest/clustering/running-client-nodes-behind-nat
[2] https://www.mail-archive.com/dev@ignite.apache.org/msg44990.html
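
If you decide to try it, the switch lives on the communication SPI. A
minimal sketch of the client-side Java configuration (I haven't used this
mode myself, so please double-check the property against [1]):

import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi;

// Sketch: force the thick client to open communication connections to the
// servers itself, so servers behind NAT never need to connect back to it.
TcpCommunicationSpi commSpi = new TcpCommunicationSpi();
commSpi.setForceClientToServerConnections(true);

IgniteConfiguration cfg = new IgniteConfiguration();
cfg.setClientMode(true);
cfg.setCommunicationSpi(commSpi);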

On Mon, Nov 15, 2021 at 12:15 PM MJ <6733...@qq.com> wrote:

> That would be peer class loading && data stream .
>
> Thanks
>
> -- Original message --
> *From:* "user" ;
> *Sent:* Monday, November 15, 2021, 3:59 PM
> *To:* "user";
> *Subject:* Re: client node connects to server nodes behind NAT
>
> Hi, I'll touch on a similar topic a little tomorrow on IgniteSummit [1],
> you're welcome :)
>
> But, a thick client is part of topology, and it has to be able to connect
> every node in the cluster directly, then thick clients have to be run
> within the same kubernetes cluster. Thin clients were designed to eliminate
> this problem.
>
> What features of thick clients do you want to use?
>
> [1] https://ignite-summit.org/sessions/293596
>
> On Mon, Nov 15, 2021 at 3:50 AM MJ <6733...@qq.com> wrote:
>
>> Hi,
>>
>> Is that possible for a non-Kubernetes client node connects to server
>> nodes within Kubernetes ?
>>
>> have read below docs seems impossible
>>
>> https://ignite.apache.org/docs/latest/installation/kubernetes/azure-deployment#connecting-client-nodes
>>
>> Have tried with thin client outside of Kubernetes - that works fine
>> client node(thick client) - always throw exceptions, most likely the
>> internal ips bebind NAT cannot be detected from external , is there any
>> workaround to implement that non-Kubernetes client node connects to server
>> nodes within Kubernetes ? I'd like to utilise the power features of thick
>> client. and They can be depoloyed everywhere if there is the way of making
>> it.
>>
>>
>>
>> Thanks,
>> Ma Jun
>>
>


Re: client node connects to server nodes behind NAT

2021-11-14 Thread Maksim Timonin
Hi, I'll touch a little on a similar topic tomorrow at Ignite Summit [1];
you're welcome to join :)

A thick client is part of the topology and has to be able to connect to
every node in the cluster directly, so thick clients have to run within the
same Kubernetes cluster. Thin clients were designed to eliminate this
problem.

What features of thick clients do you want to use?

[1] https://ignite-summit.org/sessions/293596

On Mon, Nov 15, 2021 at 3:50 AM MJ <6733...@qq.com> wrote:

> Hi,
>
> Is that possible for a non-Kubernetes client node connects to server nodes
> within Kubernetes ?
>
> have read below docs seems impossible
>
> https://ignite.apache.org/docs/latest/installation/kubernetes/azure-deployment#connecting-client-nodes
>
> Have tried with thin client outside of Kubernetes - that works fine
> client node(thick client) - always throw exceptions, most likely the
> internal ips bebind NAT cannot be detected from external , is there any
> workaround to implement that non-Kubernetes client node connects to server
> nodes within Kubernetes ? I'd like to utilise the power features of thick
> client. and They can be depoloyed everywhere if there is the way of making
> it.
>
>
>
> Thanks,
> Ma Jun
>


Re: Problem with SqlFieldsQuery

2021-10-27 Thread Maksim Timonin
Hi, Prasad!

I created a simple gist with your example, please have a look -
https://gist.github.com/timoninmaxim/534d36b23542140555901ddc0e853d3b

Some notes here:
1. QueryEntity has a `setTableName` property. Set it, and then you can
simply use that table name in your SQL query instead of the simple class
name. It's clearer (see the sketch below).
2. The value type must be set to the full class name so that inserted values
are correctly matched to the related table. You can create a class PersonSQL
in another package and try to put instances into the `cache`. A ScanQuery
will show them, but they won't be part of your `table` (thanks to the
specified value type), and `select count(*)` from the *table* will return 0.

Also, I noticed that you defined the QueryEntity with `keyFieldName = name`,
but you put items into the cache with `key = ssn`. Just to let you know,
this may lead to some mismatches between the table and the cache.
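
A minimal sketch of the entity part of note 1 (the key/value types are taken
from your config, the field list is shortened, and the table name "PERSON"
is just an example):

import java.util.LinkedHashMap;
import org.apache.ignite.cache.QueryEntity;

// Sketch: set an explicit table name and the FULL value class name, so that
// "SELECT ... FROM PERSON" works and inserted values land in this table.
LinkedHashMap<String, String> fields = new LinkedHashMap<>();
fields.put("ssn", "java.lang.String");
fields.put("name", "java.lang.String");

QueryEntity entity = new QueryEntity(
    "java.lang.String", "com.futurewei.sfqx.SqlFieldQueryXML$PersonSQL");
entity.setTableName("PERSON");
entity.setFields(fields);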


On Wed, Oct 27, 2021 at 12:53 AM Prasad Kommoju 
wrote:

> Hi Maksim,
>
>
>
> I did “Reply” and that must have retained only user@ignite.
>
>
>
> I was redacting the name (replace with blah) but it is the same in both
> places. The thing that is not clear to me is should the name in the config
> file (and the code) be fully qualified class name or simple name
> (PersonSQL). One of them seems to throw failed to set schema and the other
> empty results.
>
>
>
>
>
>
>
> ---------
>
> Regards,
>
> Prasad Kommoju
>
>
>
> *From:* Maksim Timonin 
> *Sent:* Tuesday, October 26, 2021 2:36 PM
> *To:* user@ignite.apache.org; Prasad Kommoju 
> *Subject:* Re: Problem with SqlFieldsQuery
>
>
>
> Hi, Prasad!
>
>
>
> Looks like you missed the topic, and posted your code to another thread.
> But nevertheless, did you check the package name of PersonSQL class in the
> CacheConfiguration and in your insertion code?
>
>
>
> On Tue, Oct 26, 2021 at 10:15 AM Maksim Timonin 
> wrote:
>
> Hi, Prasad!
>
>
>
> Could you please show how you insert data to the table?
>
>
>
> As I see you defined table with "com.*blah*.sfqx.SqlFieldQueryXML$PersonSQL"
> but cache scan returns objects with value type 
> "com.*futurewei*.sfqx.SqlFieldQueryXML$PersonSQL".
> Can this misprint be a reason?
>
>
>
> On Tue, Oct 26, 2021 at 5:03 AM Prasad Kommoju 
> wrote:
>
> I create a cache with QueryEntities (through ignite configuration file)
> and use SqlFieldsQuery to query it.
>
>
>
> I can see the cache in ignitevisor and it appears as table through sqlline
> interface. While ignitevisor shows the data sqlline tool does not.
>
>
>
> Here is the configuration:
>
>
>
> …
>
> 
>
>  class="org.apache.ignite.configuration.DataRegionConfiguration">
>
> 
>
> 
>
> 
>
> 
>
> 
>
>
>
> 
>
>  class="org.apache.ignite.configuration.CacheConfiguration">
>
> 
>
> 
>
> 
>
> 
>
> 
>
> 
>
>  value="java.lang.String"/>
>
>
>
> 
>
>
>
> 
>
> 
>
>
>
> 
>
> 
>
> 
>
>  value="java.lang.String"/>
>
>  value="java.lang.String"/>
>
>  value="com.futurewei.sfqx.Address"/>
>
> 
>
> 
>
> 
>
> 
>
> 
>
>  class="org.apache.ignite.cache.QueryIndex">
>
> 
>
> 
>
> 
>
> …
>
>
>
> Here is the ignitevisor output (truncated):
>
> visor> cache -scan -c=@c1
>
> Entries in  cache: PersonSQL
>
>
> ++
>
> |Key Class | Key  |  

Re: Problem with SqlFieldsQuery

2021-10-26 Thread Maksim Timonin
Hi, Prasad!

Looks like you missed the topic and posted your code to another thread.
Nevertheless, did you check that the package name of the PersonSQL class is
the same in the CacheConfiguration and in your insertion code?

On Tue, Oct 26, 2021 at 10:15 AM Maksim Timonin 
wrote:

> Hi, Prasad!
>
> Could you please show how you insert data to the table?
>
> As I see you defined table with "com.*blah*.sfqx.SqlFieldQueryXML$PersonSQL"
> but cache scan returns objects with value type 
> "com.*futurewei*.sfqx.SqlFieldQueryXML$PersonSQL".
> Can this misprint be a reason?
>
> On Tue, Oct 26, 2021 at 5:03 AM Prasad Kommoju 
> wrote:
>
>> I create a cache with QueryEntities (through ignite configuration file)
>> and use SqlFieldsQuery to query it.
>>
>>
>>
>> I can see the cache in ignitevisor and it appears as table through
>> sqlline interface. While ignitevisor shows the data sqlline tool does not.
>>
>>
>>
>> Here is the configuration:
>>
>>
>>
>> …
>>
>> 
>>
>> > class="org.apache.ignite.configuration.DataRegionConfiguration">
>>
>> 
>>
>> 
>>
>> 
>>
>> 
>>
>> 
>>
>>
>>
>> 
>>
>> > class="org.apache.ignite.configuration.CacheConfiguration">
>>
>> 
>>
>> 
>>
>> 
>>
>> 
>>
>> 
>>
>> 
>>
>> > value="java.lang.String"/>
>>
>>
>>
>> 
>>
>>
>>
>> 
>>
>> 
>>
>>
>>
>> 
>>
>> 
>>
>> 
>>
>> > value="java.lang.String"/>
>>
>> > value="java.lang.String"/>
>>
>> > value="com.futurewei.sfqx.Address"/>
>>
>> 
>>
>> 
>>
>> 
>>
>> 
>>
>> 
>>
>> > class="org.apache.ignite.cache.QueryIndex">
>>
>> 
>>
>> 
>>
>> 
>>
>> …
>>
>>
>>
>> Here is the ignitevisor output (truncated):
>>
>> visor> cache -scan -c=@c1
>>
>> Entries in  cache: PersonSQL
>>
>>
>> ++
>>
>> |Key Class | Key  |   Value Class
>> |
>> Value
>> |
>>
>>
>> ++
>>
>> | java.lang.String | 7336-18-3968 | o.a.i.i.binary.BinaryObjectImpl |
>> com.futurewei.sfqx.SqlFieldQueryXML$PersonSQL [hash=-900842615,
>> ssn=7336-18-3968, name=uuykixzs,
>> address=com.blah.sfqx.SqlFieldQueryXML$Address [idHash=302301205,
>> hash=239196030, houseNumber=2606, streetName=xjzxzzpazdzx, city=uwjitlprkd,
>> state=dzhiiisjq, zip=73550]]   |
>>
>> | java.lang.String | 6198-10-5000 | o.a.i.i.binary.BinaryObjectImpl |
>> com.futurewei.sfqx.SqlFieldQueryXML$PersonSQL [hash=426078934,
>> ssn=6198-10-5000, name=lwthwezu,
>> address=com.blah.sfqx.SqlFieldQueryXML$Address [idHash=1460034609,
>> hash=-1811594149, houseNumbe

Re: Problem with SqlFieldsQuery

2021-10-26 Thread Maksim Timonin
Hi, Prasad!

Could you please show how you insert data to the table?

As I see, you defined the table with
"com.*blah*.sfqx.SqlFieldQueryXML$PersonSQL", but the cache scan returns
objects with the value type "com.*futurewei*.sfqx.SqlFieldQueryXML$PersonSQL".
Can this mismatch be the reason?

On Tue, Oct 26, 2021 at 5:03 AM Prasad Kommoju 
wrote:

> I create a cache with QueryEntities (through ignite configuration file)
> and use SqlFieldsQuery to query it.
>
>
>
> I can see the cache in ignitevisor and it appears as table through sqlline
> interface. While ignitevisor shows the data sqlline tool does not.
>
>
>
> Here is the configuration:
>
>
>
> …
>
> 
>
>  class="org.apache.ignite.configuration.DataRegionConfiguration">
>
> 
>
> 
>
> 
>
> 
>
> 
>
>
>
> 
>
>  class="org.apache.ignite.configuration.CacheConfiguration">
>
> 
>
> 
>
> 
>
> 
>
> 
>
> 
>
>  value="java.lang.String"/>
>
>
>
> 
>
>
>
> 
>
> 
>
>
>
> 
>
> 
>
> 
>
>  value="java.lang.String"/>
>
>  value="java.lang.String"/>
>
>  value="com.futurewei.sfqx.Address"/>
>
> 
>
> 
>
> 
>
> 
>
> 
>
>  class="org.apache.ignite.cache.QueryIndex">
>
> 
>
> 
>
> 
>
> …
>
>
>
> Here is the ignitevisor output (truncated):
>
> visor> cache -scan -c=@c1
>
> Entries in  cache: PersonSQL
>
>
> ++
>
> |Key Class | Key  |   Value Class
> |
> Value
> |
>
>
> ++
>
> | java.lang.String | 7336-18-3968 | o.a.i.i.binary.BinaryObjectImpl |
> com.futurewei.sfqx.SqlFieldQueryXML$PersonSQL [hash=-900842615,
> ssn=7336-18-3968, name=uuykixzs,
> address=com.blah.sfqx.SqlFieldQueryXML$Address [idHash=302301205,
> hash=239196030, houseNumber=2606, streetName=xjzxzzpazdzx, city=uwjitlprkd,
> state=dzhiiisjq, zip=73550]]   |
>
> | java.lang.String | 6198-10-5000 | o.a.i.i.binary.BinaryObjectImpl |
> com.futurewei.sfqx.SqlFieldQueryXML$PersonSQL [hash=426078934,
> ssn=6198-10-5000, name=lwthwezu,
> address=com.blah.sfqx.SqlFieldQueryXML$Address [idHash=1460034609,
> hash=-1811594149, houseNumber=9161, streetName=npruuwxhwbai,
> city=yxexraxvuu, state=cgxcrypcy, zip=91752]] |
>
> | java.lang.String | 9448-3-1310  | o.a.i.i.binary.BinaryObjectImpl |
> com.futurewei.sfqx.SqlFieldQueryXML$PersonSQL [hash=402062681,
> ssn=9448-3-1310, name=pyaujxzt,
> address=com.blah.sfqx.SqlFieldQueryXML$Address [idHash=710697527,
> hash=1654079158, houseNumber=2267, streetName=xpgtnbzngftv,
> city=flbtopwban, state=jkdrlxwqj, zip=11144]]|
>
> | java.lang.String | 9880-7-3532
>
>
>
> Here is the sqlline output:
>
> 0: jdbc:ignite:thin://127.0.0.1:10800> !tables
>
>
> +---+-+-++-+--++---+---++
>
> | TABLE_CAT | TABLE_SCHEM | TABLE_NAME  | TABLE_TYPE |
> REMARKS | TYPE_CAT | TYPE_SCHEM | TYPE_NAME | SELF_REFERENCING_COL_NAME |
> REF_GENERATION |
>
>
> +---+-+-++-+--++---+---++
>
> | IGNITE| PersonSQL   | PERSONSQL   | TABLE
> | |  ||   |
> ||
>
> | IGNITE| SYS | BASELINE_NODES  | VIEW
> | |  ||   |
> ||
>
> | IGNITE| SYS | BINARY_METADATA
>
> …
>
> 0: jdbc:ignite:thin://127.0.0.1:10800> 

Re: The code inside the CacheEntryProcessor executes multiple times. Why?

2021-10-20 Thread Maksim Timonin
Hi,

Yes, you're correct that this method will be part of the public API only
since 2.12. But the solution is to register this class beforehand. You can
do it, for example, by putting an instance of this class into a cache.
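
A sketch of that warm-up (the cache name and key are placeholders, any cache
will do, and `new ItemClass1()` stands for however you construct the object;
the only goal is to have the cluster register the ItemClass1 binary type
before the entry processor runs):

import org.apache.ignite.IgniteCache;

// Sketch: serializing one instance registers the ItemClass1 binary metadata
// across the cluster, so the entry processor is not replayed for registration.
IgniteCache<Integer, ItemClass1> cache = ignite.getOrCreateCache("warmup");
cache.put(0, new ItemClass1());
cache.remove(0);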

On Tue, Oct 19, 2021 at 11:42 AM 38797715 <38797...@qq.com> wrote:

> As far as I know, this method is added in the following ticket and has not
> been published yet:
>
> https://issues.apache.org/jira/browse/IGNITE-15065
>
> Is there any alternative solution in the existing release?
> 在 2021/10/19 15:34, Maksim Timonin 写道:
>
> Hi!
>
> You use an unregistered class in your entry processor: ItemClass1. You
> should register it before using it. The code does the job.
>
> `ignite.binary().registerClass(ItemClass1.class);`
>
> When class isn't registered Ignite does it by itself. But it requires to
> replay your code after it finds that class is unregistered. You can help
> cluster to register this class by self.
>
>
>
>
> On Tue, Oct 19, 2021 at 3:52 AM 38797715 <38797...@qq.com> wrote:
>
>> Any feedback?
>> On 2021/10/14 15:03, 38797715 wrote:
>>
>> Hi,
>>
>> The internal code of CacheEntryProcessor in the attachment has been
>> executed multiple times. Why?
>> Is there any simple way to solve this problem?
>>
>>


Re: The code inside the CacheEntryProcessor executes multiple times. Why?

2021-10-19 Thread Maksim Timonin
Hi!

You use an unregistered class in your entry processor: ItemClass1. You
should register it before using it. The following code does the job:

`ignite.binary().registerClass(ItemClass1.class);`

When a class isn't registered, Ignite registers it by itself, but that
requires replaying your code after it finds that the class is unregistered.
You can help the cluster by registering this class yourself.




On Tue, Oct 19, 2021 at 3:52 AM 38797715 <38797...@qq.com> wrote:

> Any feedback?
> On 2021/10/14 15:03, 38797715 wrote:
>
> Hi,
>
> The internal code of CacheEntryProcessor in the attachment has been
> executed multiple times. Why?
> Is there any simple way to solve this problem?
>
>


Re: Missing libs/optional/ignite-kafka in latest (2.11.0) binary package of Apache Ignite

2021-10-12 Thread Maksim Timonin
Hi, Dany!

Most of the optional libraries were moved from Ignite to ignite-extensions
[1]. You can find them in Maven [2].

There is a simple example of how to include them as a dependency in your
project's pom.xml: https://ignite.apache.org/download.cgi#extensions

[1] https://github.com/apache/ignite-extensions/tree/ignite-kafka-ext-1.0.0
[2] For example, for kafka -
https://mvnrepository.com/artifact/org.apache.ignite/ignite-kafka-ext

On Tue, Oct 12, 2021 at 11:57 PM Dany Ayotte  wrote:

>
> Hi,
>
> Many optional libraries are missing in the latest 2.11.0 binary
> distribution of Apache Ignite (example: ignite-kafka). Any explanation?
>
> Thank you
>
>


Re: Run examples in Idea

2021-09-22 Thread Maksim Timonin
Anton, sorry for being a little misleading. The examples project has maven
profiles with different names: "scala" or "spark-2.4".

Try using them to enable the extra source directories. Thanks
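
P.S. From the command line that would be something like
`mvn clean install -DskipTests -P scala,spark-2.4` in the examples module
(a sketch; the profile names are as in examples/pom.xml). In IDEA you can
simply tick the same profiles in the Maven tool window and reload the
project.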

On Wed, Sep 22, 2021 at 12:19 PM Maksim Timonin 
wrote:

> Hi,
>
> There are some maven profiles in pom.xml. You should enable profiles
> all-java,all-scala for building most modules.
>
> On Wed, Sep 22, 2021 at 12:01 PM Антон  wrote:
>
>> Hi,
>> one more question regarding Idea - how to build sources from folders
>> other than src/main/java, for example src/main/spark? These folders were
>> not added as source directories upon project opening. And when I add it
>> manually there are no appropriate dependencies in examples/pom.xml (and in
>> its' parents) anyway.
>>
>> 09.09.2021, 16:38, "Pavel Tupitsyn" :
>>
>> Hi,
>>
>> This happens - Idea picks up compiler options from the 9+ profile and
>> sticks with them
>> - Nuke everything (git clean -fdx), build with Maven, open Idea, uncheck
>> java-9+ profile, Reload Maven Projects
>> - Still does not work? Open Settings -> Build, Execution, Deployment ->
>> Compiler -> Java Compiler, and clean up "override compiler parameters
>> per-module"
>>
>> On Thu, Sep 9, 2021 at 4:29 PM Антон  wrote:
>>
>> Hi. I'm having trouble running examples in Idea with Java 1.8. I've built
>> project succesfully using maven, but when I'm trying to Run or Debug some
>> file in Idea it shows error message saying about invalid flag
>> "--add-exports", but I didn't add this flag. I even deleted profile "9+"
>> from pom.xml of parent project (which contains such flag) but it didn't
>> help. Please suggest some way to fix this.
>>
>> Full log:
>> Executing pre-compile tasks...
>> Loading Ant configuration...
>> Running Ant tasks...
>> Running 'before' tasks
>> Checking sources
>> Copying resources... [ignite-examples]
>> Parsing java... [ignite-examples]
>> java: invalid flag: --add-exports
>> java: Errors occurred while compiling module 'ignite-examples'
>> Checking dependencies... [ignite-examples]
>> Dependency analysis found 0 affected files
>> javac 1.8.0_201 was used to compile java sources
>> Finished, saving caches...
>> Compilation failed: errors: 1; warnings: 0
>> Executing post-compile tasks...
>> Loading Ant configuration...
>> Running Ant tasks...
>> Synchronizing output directories...
>> 09.09.2021 16:17 - Build completed with 1 error and 0 warnings in 3 sec,
>> 66 ms
>>
>>


Re: Run examples in Idea

2021-09-22 Thread Maksim Timonin
Hi,

There are some maven profiles in pom.xml. You should enable the profiles
all-java and all-scala to build most modules.

On Wed, Sep 22, 2021 at 12:01 PM Антон  wrote:

> Hi,
> one more question regarding Idea - how to build sources from folders other
> than src/main/java, for example src/main/spark? These folders were not
> added as source directories upon project opening. And when I add it
> manually there are no appropriate dependencies in examples/pom.xml (and in
> its' parents) anyway.
>
> 09.09.2021, 16:38, "Pavel Tupitsyn" :
>
> Hi,
>
> This happens - Idea picks up compiler options from the 9+ profile and
> sticks with them
> - Nuke everything (git clean -fdx), build with Maven, open Idea, uncheck
> java-9+ profile, Reload Maven Projects
> - Still does not work? Open Settings -> Build, Execution, Deployment ->
> Compiler -> Java Compiler, and clean up "override compiler parameters
> per-module"
>
> On Thu, Sep 9, 2021 at 4:29 PM Антон  wrote:
>
> Hi. I'm having trouble running examples in Idea with Java 1.8. I've built
> project succesfully using maven, but when I'm trying to Run or Debug some
> file in Idea it shows error message saying about invalid flag
> "--add-exports", but I didn't add this flag. I even deleted profile "9+"
> from pom.xml of parent project (which contains such flag) but it didn't
> help. Please suggest some way to fix this.
>
> Full log:
> Executing pre-compile tasks...
> Loading Ant configuration...
> Running Ant tasks...
> Running 'before' tasks
> Checking sources
> Copying resources... [ignite-examples]
> Parsing java... [ignite-examples]
> java: invalid flag: --add-exports
> java: Errors occurred while compiling module 'ignite-examples'
> Checking dependencies... [ignite-examples]
> Dependency analysis found 0 affected files
> javac 1.8.0_201 was used to compile java sources
> Finished, saving caches...
> Compilation failed: errors: 1; warnings: 0
> Executing post-compile tasks...
> Loading Ant configuration...
> Running Ant tasks...
> Synchronizing output directories...
> 09.09.2021 16:17 - Build completed with 1 error and 0 warnings in 3 sec,
> 66 ms
>
>


Re: Ignite crashed with CorruptedTreeException

2021-06-16 Thread Maksim Timonin
Hi, Marcus!

Great! You're welcome.

On Wed, Jun 16, 2021 at 10:08 AM Lo, Marcus  wrote:

> Hi Maksim,
>
>
>
> Thanks. We have tested the workaround and it mitigates the issue. 
>
>
>
> Regards,
>
> Marcus
>
>
>
> *From:* [gmail.com] Maksim Timonin 
> *Sent:* Thursday, June 10, 2021 5:52 PM
> *To:* user@ignite.apache.org
> *Subject:* Re: Ignite crashed with CorruptedTreeException
>
>
>
> Hi, Marcus! I've found a bug. It was already fixed within this ticket
> https://issues.apache.org/jira/browse/IGNITE-14451.
> So it will be available in Ignite 2.11.
>
>
>
> Reason is that you declare fields in your table in order (viewId, status)
> while PK has order (status, viewId). So, change the order in the table and
> it will be fine. This bug affects only Primary Key indexes, so it's safe to
> declare fields in different order in secondary indexes.
>
>
>
> On Thu, Jun 10, 2021 at 10:58 AM Maksim Timonin 
> wrote:
>
> hi, Marcus!
>
>
>
> Thank you for the reproducer, I've succeed in reproducing it. Create a
> ticket for that https://issues.apache.org/jira/browse/IGNITE-14869
>
>
>
> I will return to you with a workaround, after finding a reason.
>
>
>
> On Thu, Jun 10, 2021 at 3:18 AM Lo, Marcus  wrote:
>
> Hi Ivan, Maksim,
>
> Here is the reproducer:
>
> import org.apache.ignite.Ignition;
> import org.apache.ignite.binary.BinaryObject;
> import org.apache.ignite.binary.BinaryObjectBuilder;
> import org.apache.ignite.cache.QueryEntity;
> import org.apache.ignite.client.ClientCache;
> import org.apache.ignite.client.IgniteClient;
> import org.apache.ignite.configuration.BinaryConfiguration;
> import org.apache.ignite.configuration.ClientConfiguration;
> import org.junit.jupiter.api.Test;
>
> import java.sql.Timestamp;
> import java.time.Instant;
> import java.util.*;
> import java.util.stream.IntStream;
>
> import static java.util.stream.Collectors.toList;
>
> public class Reproducer {
>
> @Test
> public void reproduce() throws InterruptedException {
> ClientConfiguration config = constructIgniteThinClientConfig();
> IgniteClient ignite = Ignition.startClient(config);
>
> List uuids = IntStream.range(0, 200).mapToObj((i) ->
> UUID.randomUUID()).collect(toList());
>
> while (true) {
> upsertLimitViewData(ignite, uuids);
> Thread.sleep(1000);
> }
> }
>
> private void upsertLimitViewData(IgniteClient ignite, List
> uuids) {
> System.out.println("[" + Instant.now() + "] upserting data... " +
> Thread.currentThread().getName());
>
> ClientCache cache =
> ignite.cache("LimitViewStatusCache").withKeepBinary();
> QueryEntity queryEntity =
> cache.getConfiguration().getQueryEntities()[0];
> BinaryObjectBuilder keyBuilder =
> ignite.binary().builder(queryEntity.getKeyType());
> BinaryObjectBuilder valueBuilder =
> ignite.binary().builder(queryEntity.getValueType());
> HashMap valueMap = new HashMap<>();
>
> for (int i = 0; i < 200; i++) {
> BinaryObject key = keyBuilder
> .setField("viewId", uuids.get(i))
> .setField("status", "moo")
> .build();
>
> BinaryObject value = valueBuilder
> .setField("batchId", new Random().nextInt())
> .setField("instance", Integer.toString(new
> Random().nextInt()))
> .setField("nodes", Integer.toString(new
> Random().nextInt()))
> .setField("eqtgContext", Integer.toString(new
> Random().nextInt()))
> .setField("lastUpdateTime",
> Timestamp.from(Instant.now()))
> .build();
>
> valueMap.put(key, value);
> }
>
> cache.putAll(valueMap);
> }
>
> private ClientConfiguration constructIgniteThinClientConfig() {
> return
> new ClientConfiguration()
> .setAddresses("xxx:10800")
> .setPartitionAwarenessEnabled(false)
> .setBinaryConfiguration(new
> BinaryConfiguration().setCo

Re: Ignite crashed with CorruptedTreeException

2021-06-10 Thread Maksim Timonin
Hi, Marcus! I've found the bug. It has already been fixed in this ticket:
https://issues.apache.org/jira/browse/IGNITE-14451. So the fix will be
available in Ignite 2.11.

The reason is that you declare the fields in your table in the order
(viewId, status), while the PK has the order (status, viewId). So change the
order in the table and it will be fine. This bug affects only primary key
indexes, so it's safe to declare fields in a different order in secondary
indexes.
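
For clarity, a sketch of what "change the order" means for a
QueryEntity-defined table (the key/value type names are placeholders, the
field names are from your reproducer, and everything else is omitted):

import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.LinkedHashSet;
import org.apache.ignite.cache.QueryEntity;

// Sketch: declare the fields in the same order as the primary key
// (status, viewId), so the table order matches the PK order.
LinkedHashMap<String, String> fields = new LinkedHashMap<>();
fields.put("status", "java.lang.String");
fields.put("viewId", "java.util.UUID");
// ... value fields (batchId, instance, nodes, eqtgContext, lastUpdateTime) ...

QueryEntity entity = new QueryEntity("LimitViewStatusKey", "LimitViewStatusValue");
entity.setFields(fields);
entity.setKeyFields(new LinkedHashSet<>(Arrays.asList("status", "viewId")));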

On Thu, Jun 10, 2021 at 10:58 AM Maksim Timonin 
wrote:

> hi, Marcus!
>
> Thank you for the reproducer, I've succeed in reproducing it. Create a
> ticket for that https://issues.apache.org/jira/browse/IGNITE-14869
>
> I will return to you with a workaround, after finding a reason.
>
> On Thu, Jun 10, 2021 at 3:18 AM Lo, Marcus  wrote:
>
>> Hi Ivan, Maksim,
>>
>> Here is the reproducer:
>>
>> import org.apache.ignite.Ignition;
>> import org.apache.ignite.binary.BinaryObject;
>> import org.apache.ignite.binary.BinaryObjectBuilder;
>> import org.apache.ignite.cache.QueryEntity;
>> import org.apache.ignite.client.ClientCache;
>> import org.apache.ignite.client.IgniteClient;
>> import org.apache.ignite.configuration.BinaryConfiguration;
>> import org.apache.ignite.configuration.ClientConfiguration;
>> import org.junit.jupiter.api.Test;
>>
>> import java.sql.Timestamp;
>> import java.time.Instant;
>> import java.util.*;
>> import java.util.stream.IntStream;
>>
>> import static java.util.stream.Collectors.toList;
>>
>> public class Reproducer {
>>
>> @Test
>> public void reproduce() throws InterruptedException {
>> ClientConfiguration config = constructIgniteThinClientConfig();
>> IgniteClient ignite = Ignition.startClient(config);
>>
>> List uuids = IntStream.range(0, 200).mapToObj((i) ->
>> UUID.randomUUID()).collect(toList());
>>
>> while (true) {
>> upsertLimitViewData(ignite, uuids);
>> Thread.sleep(1000);
>> }
>> }
>>
>> private void upsertLimitViewData(IgniteClient ignite, List
>> uuids) {
>> System.out.println("[" + Instant.now() + "] upserting data... " +
>> Thread.currentThread().getName());
>>
>> ClientCache cache =
>> ignite.cache("LimitViewStatusCache").withKeepBinary();
>> QueryEntity queryEntity =
>> cache.getConfiguration().getQueryEntities()[0];
>> BinaryObjectBuilder keyBuilder =
>> ignite.binary().builder(queryEntity.getKeyType());
>> BinaryObjectBuilder valueBuilder =
>> ignite.binary().builder(queryEntity.getValueType());
>> HashMap valueMap = new HashMap<>();
>>
>> for (int i = 0; i < 200; i++) {
>> BinaryObject key = keyBuilder
>> .setField("viewId", uuids.get(i))
>> .setField("status", "moo")
>> .build();
>>
>> BinaryObject value = valueBuilder
>> .setField("batchId", new Random().nextInt())
>> .setField("instance", Integer.toString(new
>> Random().nextInt()))
>> .setField("nodes", Integer.toString(new
>> Random().nextInt()))
>> .setField("eqtgContext", Integer.toString(new
>> Random().nextInt()))
>> .setField("lastUpdateTime",
>> Timestamp.from(Instant.now()))
>> .build();
>>
>> valueMap.put(key, value);
>> }
>>
>>     cache.putAll(valueMap);
>> }
>>
>> private ClientConfiguration constructIgniteThinClientConfig() {
>> return
>> new ClientConfiguration()
>> .setAddresses("xxx:10800")
>> .setPartitionAwarenessEnabled(false)
>> .setBinaryConfiguration(new
>> BinaryConfiguration().setCompactFooter(false))
>> .setUserName("xxx")
>> .setUserPassword("xxx");
>> }
>> }
>>
>>
>> Regards,
>> Marcus
>>
>> -Original Message-
>> From: [External] Maksim Timonin 
>> Sent: Thursday, June 10, 2021 12:31 AM
>> To: user@ignite.apache.org
>> Subject: Re: Ignite crashed with CorruptedTreeException
>>
>> Hi Marcus!
>>
>> Could you please provide a complete code that inserts data (either it is
>> SQL, or cache put, which types do you use, etc.). I've tried to reproduce
>> your case but failed.
>>
>> Thanks a lot!
>>
>>
>>
>> --
>> Sent from:
>> http://apache-ignite-users.70518.x6.nabble.com/
>>
>


Re: Ignite crashed with CorruptedTreeException

2021-06-10 Thread Maksim Timonin
Hi, Marcus!

Thank you for the reproducer; I've succeeded in reproducing it. I created a
ticket for it: https://issues.apache.org/jira/browse/IGNITE-14869

I will get back to you with a workaround after finding the reason.

On Thu, Jun 10, 2021 at 3:18 AM Lo, Marcus  wrote:

> Hi Ivan, Maksim,
>
> Here is the reproducer:
>
> import org.apache.ignite.Ignition;
> import org.apache.ignite.binary.BinaryObject;
> import org.apache.ignite.binary.BinaryObjectBuilder;
> import org.apache.ignite.cache.QueryEntity;
> import org.apache.ignite.client.ClientCache;
> import org.apache.ignite.client.IgniteClient;
> import org.apache.ignite.configuration.BinaryConfiguration;
> import org.apache.ignite.configuration.ClientConfiguration;
> import org.junit.jupiter.api.Test;
>
> import java.sql.Timestamp;
> import java.time.Instant;
> import java.util.*;
> import java.util.stream.IntStream;
>
> import static java.util.stream.Collectors.toList;
>
> public class Reproducer {
>
> @Test
> public void reproduce() throws InterruptedException {
> ClientConfiguration config = constructIgniteThinClientConfig();
> IgniteClient ignite = Ignition.startClient(config);
>
> List uuids = IntStream.range(0, 200).mapToObj((i) ->
> UUID.randomUUID()).collect(toList());
>
> while (true) {
> upsertLimitViewData(ignite, uuids);
> Thread.sleep(1000);
> }
> }
>
> private void upsertLimitViewData(IgniteClient ignite, List
> uuids) {
> System.out.println("[" + Instant.now() + "] upserting data... " +
> Thread.currentThread().getName());
>
> ClientCache cache =
> ignite.cache("LimitViewStatusCache").withKeepBinary();
> QueryEntity queryEntity =
> cache.getConfiguration().getQueryEntities()[0];
> BinaryObjectBuilder keyBuilder =
> ignite.binary().builder(queryEntity.getKeyType());
> BinaryObjectBuilder valueBuilder =
> ignite.binary().builder(queryEntity.getValueType());
> HashMap valueMap = new HashMap<>();
>
> for (int i = 0; i < 200; i++) {
> BinaryObject key = keyBuilder
> .setField("viewId", uuids.get(i))
> .setField("status", "moo")
> .build();
>
> BinaryObject value = valueBuilder
> .setField("batchId", new Random().nextInt())
> .setField("instance", Integer.toString(new
> Random().nextInt()))
> .setField("nodes", Integer.toString(new
> Random().nextInt()))
> .setField("eqtgContext", Integer.toString(new
> Random().nextInt()))
> .setField("lastUpdateTime",
> Timestamp.from(Instant.now()))
> .build();
>
> valueMap.put(key, value);
> }
>
> cache.putAll(valueMap);
> }
>
> private ClientConfiguration constructIgniteThinClientConfig() {
> return
> new ClientConfiguration()
> .setAddresses("xxx:10800")
> .setPartitionAwarenessEnabled(false)
> .setBinaryConfiguration(new
> BinaryConfiguration().setCompactFooter(false))
> .setUserName("xxx")
> .setUserPassword("xxx");
> }
> }
>
>
> Regards,
> Marcus
>
> -Original Message-
> From: [External] Maksim Timonin 
> Sent: Thursday, June 10, 2021 12:31 AM
> To: user@ignite.apache.org
> Subject: Re: Ignite crashed with CorruptedTreeException
>
> Hi Marcus!
>
> Could you please provide a complete code that inserts data (either it is
> SQL, or cache put, which types do you use, etc.). I've tried to reproduce
> your case but failed.
>
> Thanks a lot!
>
>
>
> --
> Sent from:
> https://urldefense.com/v3/__http://apache-ignite-users.70518.x6.nabble.com/__;!!Jkho33Y!2v--HF_tnHWeR_0YefFDx-NcnoY3hkO-9G94IAXG23N6qzB_qz-rSYtuciav3A$
>


Re: Ignite crashed with CorruptedTreeException

2021-06-09 Thread Maksim Timonin
Hi Marcus!

Could you please provide the complete code that inserts the data (whether it
is SQL or a cache put, which types you use, etc.)? I've tried to reproduce
your case but failed.

Thanks a lot!



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/