Re: Identifying Coordinator node

2024-02-01 Thread Alex Plehanov
Hello,

1. The coordinator is the oldest server node. So you can use something
like: ignite.cluster().forServers().forOldest().node()
2. Why can't you upgrade to a newer Ignite version? Perhaps this
problem is already fixed. If not, please provide details for the
actual Ignite version; this will help to fix it.
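For example, a minimal sketch (hedged: the coordinator can change whenever
an older node leaves the topology, so re-check it before each deployment step):

    import org.apache.ignite.Ignite;
    import org.apache.ignite.Ignition;
    import org.apache.ignite.cluster.ClusterNode;

    public class FindCoordinator {
        public static void main(String[] args) {
            try (Ignite ignite = Ignition.start()) {
                // The coordinator is the oldest server node in the topology.
                ClusterNode crd = ignite.cluster().forServers().forOldest().node();
                System.out.println("Coordinator: " + crd.id() + " " + crd.addresses());
            }
        }
    }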

Thu, Feb 1, 2024 at 22:23, Ronak Shah :
>
> Hi,
> I am running into an issue where a rolling deployment of my 5-node cluster is
> failing. The issue is repeatable when the first node that gets re-deployed is
> the coordinator node.
>
> In order to stop this behavior, I want to deploy to the coordinator node last.
> How can I know which node is the coordinator node? I don't find any APIs.
>
> I am using version 2.10.
>
> Thanks for the help in advance.
>
> Ronak


Re: Possible WAL corruption on running system during K8s update

2023-07-18 Thread Alex Plehanov
Hello,

Which Ignite version do you use?
Please share exception details after "Exception during start processors,
node will be stopped and close connections" (there should be a reason in
the log, why the page delta can't be applied).

Tue, Jul 18, 2023 at 05:05, Raymond Wilson :

> Hi,
>
> We run a dev/alpha stack of our application in Azure Kubernetes.
> Persistent storage is contained in Azure Files NAS storage volumes, one per
> server node.
>
> We ran an upgrade of Kubernetes today (from 1.24.9 to 1.26.3). During the
> update various pods were stopped and restarted as is normal for an update.
> This included nodes running the dev/alpha stack.
>
> At least one node (of a cluster of four server nodes in the cluster)
> failed to restart after the update, with the following logging:
>
>   2023-07-18 01:23:55.171 [1] INF Restoring checkpoint after logical
> recovery, will start physical recovery from back pointer: WALPointer
> [idx=2431, fileOff=209031823, len=29]
>  2023-07-18 01:23:55.205 [28] ERR Failed to apply page delta.
> rec=[PagesListRemovePageRecord [rmvdPageId=010100010057,
> pageId=010100010004, grpId=-1476359018, super=PageDeltaRecord
> [grpId=-1476359018, pageId=010100010004, super=WALRecord [size=41,
> chainSize=0, pos=WALPointer [idx=2431, fileOff=209169155, len=41],
> type=PAGES_LIST_REMOVE_PAGE
>  2023-07-18 01:23:55.217 [1] INF Cleanup cache stores [total=0, left=0,
> cleanFiles=false]
>  2023-07-18 01:23:55.218 [1] ERR Got exception while starting (will
> rollback startup routine).
>  2023-07-18 01:23:55.218 [1] ERR Exception during start processors,
> node will be stopped and close connections
>
> I know Apache Ignite is very good at surviving 'Big Red Switch' scenarios,
> and we have our data regions configured with the strictest update protocol
> (full sync after each write), however it's possible the NAS implementation
> does something different!
>
> I think if we delete the WAL files from the nodes that won't restart then
> the node may be happy, though we will lose any updates since the last
> checkpoint (but then, it has low use and checkpoints are every 30-45
> seconds or so, so this won't be significant).
>
> Is this an error anyone else has noticed?
> Has anyone else had similar issues with Azure Files when using strict
> update/sync semantics?
>
> Thanks,
> Raymond.
>
> --
> 
> Raymond Wilson
> Trimble Distinguished Engineer, Civil Construction Software (CCS)
> 11 Birmingham Drive | Christchurch, New Zealand
> raymond_wil...@trimble.com
>
>
> 
>


Re: Re: setLocal is not working for Calcite

2023-06-30 Thread Alex Plehanov
Hitesh,

To unsubscribe, send a message to user-unsubscr...@ignite.apache.org
(not to user@ignite.apache.org).

Fri, Jun 30, 2023 at 13:50, Hitesh Nandwana :
>
> Why is it not unsubscribing?
>
> On Fri, 30 Jun, 2023, 14:17 Alex Plehanov,  wrote:
>>
>> Hello,
>>
>> The patch to support the setLocal flag for the Calcite engine will be
>> merged soon (see ticket [1]).
>>
>> [1] https://issues.apache.org/jira/browse/IGNITE-19725
>>
>> Fri, Jun 30, 2023 at 13:35, y :
>> >
>> > Thanks for your suggestion. We are considering the feasibility of your 
>> > proposed solution. For complex business production environments, it is 
>> > difficult to distinguish whether the query requires setLocal or not. 
>> > Anyway, thank you for your help.
>> >
>> >
>> >
>> >
>> >
>> > At 2023-06-30 15:18:07, "Stephen Darlington" 
>> >  wrote:
>> >
>> > If this is an important feature for you, the obvious solution would be to 
>> > use the H2 SQL engine (which is still the default, since the Calcite 
>> > engine is still considered beta).
>> >
>> > As noted in the documentation, you can even keep Calcite as the default 
>> > engine in your cluster and only route these queries to H2. 
>> > https://ignite.apache.org/docs/latest/SQL/sql-calcite#query_engine-hint
>> >
>> > On 30 Jun 2023, at 03:50, y  wrote:
>> >
>> > Hello.
>> >    I'm sorry it took so long to reply; I missed some messages. For example,
>> > I have multiple nodes performing the same computational tasks. The cache
>> > mode is partitioned and the data is collocated by affinity_key. So different
>> > nodes hold different data, and each node only queries/calculates data from
>> > itself. Without setLocal, a node will query data from other nodes, which is
>> > inefficient. That's why I need setLocal. What should I do without setLocal?
>> >
>> > 
>> >
>> > Yours,
>> > Hu Tiany
>> > 2023/6/30
>> >
>> >
>> > At 2023-06-05 19:21:13, "Alex Plehanov"  wrote:
>> > >Hello,
>> > >
>> > >The Calcite-based SQL engine currently doesn't analyze any properties
>> > >of SqlFieldsQuery except "Sql", "Schema", "Args" and
>> > >"QueryInitiatorId". Some of the remaining properties are useless for
>> > >the Calcite-based engine (for example, "DistributedJoins", since all
>> > >joins in the Calcite-based engine are distributed by default when
>> > >needed). But perhaps others could be useful. If you are really sure
>> > >that the "Local" property is necessary for the new SQL engine, feel
>> > >free to create a ticket and describe the reason why we need it.
>> > >
>> > >Mon, Jun 5, 2023 at 12:05, y :
>> > >>
>> > >> Hello igniters,
>> > >>  Just like the title says, setLocal seems to have no effect with Calcite
>> > >> in 2.15. When I set ‘setLocal = true’ and query data from one node, the
>> > >> result set is returned from all data nodes. This problem is not present in
>> > >> version 2.13, which does not use Calcite. I'd like to know: is this a bug?
>> > >> If it is, when will it be fixed?
>> > >>
>> > >> SqlFieldsQuery fieldsQuery = new SqlFieldsQuery(query);
>> > >> fieldsQuery.setLocal(true);   // ineffective: this setting is ignored
>> > >> List<List<?>> rs =
>> > >> ignite.cache("bfaccounttitle2020").query(fieldsQuery).getAll();
>> > >>
>> > >>
>> >
>> >


Re: Re: setLocal is not working for Calcite

2023-06-30 Thread Alex Plehanov
Hello,

The patch to support the setLocal flag for the Calcite engine will be
merged soon (see ticket [1]).

[1] https://issues.apache.org/jira/browse/IGNITE-19725

Fri, Jun 30, 2023 at 13:35, y :
>
> Thanks for your suggestion. We are considering the feasibility of your 
> proposed solution. For complex business production environments, it is 
> difficult to distinguish whether the query requires setLocal or not. Anyway, 
> thank you for your help.
>
>
>
>
>
> At 2023-06-30 15:18:07, "Stephen Darlington" 
>  wrote:
>
> If this is an important feature for you, the obvious solution would be to use 
> the H2 SQL engine (which is still the default, since the Calcite engine is 
> still considered beta).
>
> As noted in the documentation, you can even keep Calcite as the default 
> engine in your cluster and only route these queries to H2. 
> https://ignite.apache.org/docs/latest/SQL/sql-calcite#query_engine-hint
>
> On 30 Jun 2023, at 03:50, y  wrote:
>
> Hello.
>    I'm sorry it took so long to reply; I missed some messages. For example,
> I have multiple nodes performing the same computational tasks. The cache mode
> is partitioned and the data is collocated by affinity_key. So different nodes
> hold different data, and each node only queries/calculates data from itself.
> Without setLocal, a node will query data from other nodes, which is
> inefficient. That's why I need setLocal. What should I do without setLocal?
>
> 
>
> Yours,
> Hu Tiany
> 2023/6/30
>
>
> At 2023-06-05 19:21:13, "Alex Plehanov"  wrote:
> >Hello,
> >
> >The Calcite-based SQL engine currently doesn't analyze any properties
> >of SqlFieldsQuery except "Sql", "Schema", "Args" and
> >"QueryInitiatorId". Some of the remaining properties are useless for
> >the Calcite-based engine (for example, "DistributedJoins", since all
> >joins in the Calcite-based engine are distributed by default when
> >needed). But perhaps others could be useful. If you are really sure
> >that the "Local" property is necessary for the new SQL engine, feel
> >free to create a ticket and describe the reason why we need it.
> >
> >Mon, Jun 5, 2023 at 12:05, y :
> >>
> >> Hello igniters,
> >>  Just like the title says, setLocal seems to have no effect with Calcite in
> >> 2.15. When I set ‘setLocal = true’ and query data from one node, the result
> >> set is returned from all data nodes. This problem is not present in version
> >> 2.13, which does not use Calcite. I'd like to know: is this a bug? If it is,
> >> when will it be fixed?
> >>
> >> SqlFieldsQuery fieldsQuery = new SqlFieldsQuery(query);
> >> fieldsQuery.setLocal(true);   // ineffective: this setting is ignored
> >> List<List<?>> rs =
> >> ignite.cache("bfaccounttitle2020").query(fieldsQuery).getAll();
> >>
> >>
>
>


Re: setLocal is not working for Calcite

2023-06-05 Thread Alex Plehanov
Hello,

The Calcite-based SQL engine currently doesn't analyze any properties
of SqlFieldsQuery except "Sql", "Schema", "Args" and
"QueryInitiatorId". Some of the remaining properties are useless for
the Calcite-based engine (for example, "DistributedJoins", since all
joins in the Calcite-based engine are distributed by default when
needed). But perhaps others could be useful. If you are really sure
that the "Local" property is necessary for the new SQL engine, feel
free to create a ticket and describe the reason why we need it.

Mon, Jun 5, 2023 at 12:05, y :
>
> Hello igniters,
>  Just like the title says, setLocal seems to have no effect with Calcite in
> 2.15. When I set ‘setLocal = true’ and query data from one node, the result
> set is returned from all data nodes. This problem is not present in version
> 2.13, which does not use Calcite. I'd like to know: is this a bug? If it is,
> when will it be fixed?
>
> SqlFieldsQuery fieldsQuery = new SqlFieldsQuery(query);
> fieldsQuery.setLocal(true);   // ineffective: this setting is ignored
> List<List<?>> rs =
> ignite.cache("bfaccounttitle2020").query(fieldsQuery).getAll();
>
>


Re: Ignite thin client continuous query listener cannot listen to all events

2023-05-25 Thread Alex Plehanov
Hello,

Looks like the bug is related to the server-side continuous query
functionality. I've tried to reproduce it using a thick client and got
the same results: in rare cases events are lost.

Thu, May 25, 2023 at 06:48, Pavel Tupitsyn :
>
> Thank you for the bug report, I will have a look.
>
> On Thu, May 25, 2023 at 5:10 AM LonesomeRain  wrote:
>>
>> Hi, Jeremy McMillan
>>
>>
>>
>> I have created a Jira about this bug.
>>
>>
>>
>> https://issues.apache.org/jira/browse/IGNITE-19561
>>
>>
>>
>> By executing the two test programs in the description, it is easy to
>> reproduce the problem.
>>
>>
>>
>> What should I do next? Just keep following this Jira?
>>
>>
>>
>>
>>
>> From: Jeremy McMillan
>> Sent: May 24, 2023 23:24
>> To: user@ignite.apache.org
>> Subject: Re: Ignite thin client continuous query listener cannot listen to all
>> events
>>
>>
>>
>> Thanks for bringing this up!
>>
>>
>>
>> https://ignite.apache.org/docs/latest/key-value-api/continuous-queries#events-delivery-guarantees
>>
>>
>>
>> This sounds like you may have found a bug, but the details you've provided 
>> are not sufficient to help others recreate and observe it for themselves, 
>> and this effort needs to be recorded in a ticket. Would you be able to sign 
>> up for a Jira account and detail steps to reproduce this behavior?
>>
>> You may also want to research this:
>>   https://issues.apache.org/jira/browse/IGNITE-8035
>>
>>
>>
>> On Mon, May 22, 2023 at 6:52 AM lonesomerain  wrote:
>>
>> Hi,
>>
>> I have a question while using ignite 2.15.0
>>
>>
>>
>> Problem scenario:
>>
>> Start a one-node Ignite server, start one thin client that creates a
>> continuous query listener, and then use 50 threads to add 500 entries to the
>> cache concurrently.
>>
>> Problem phenomenon:
>>
>> From the information printed by the listener, the number of events
>> received varies from run to run: possibly 496, 499 or 500...
>>
>> Test Code:
>>
>> public class StartServer {
>>     public static void main(String[] args) {
>>         Ignite ignite = Ignition.start();
>>     }
>> }
>>
>> public class StartThinClient {
>>     public static void main(String[] args) throws InterruptedException {
>>         String addr = "127.0.0.1:10800";
>>
>>         int threadNum = 50;
>>
>>         ClientConfiguration clientConfiguration = new ClientConfiguration();
>>         clientConfiguration.setAddresses(addr);
>>
>>         IgniteClient client1 = Ignition.startClient(clientConfiguration);
>>
>>         ClientCache<String, Object> cache1 = client1.getOrCreateCache("test");
>>
>>         ContinuousQuery<String, Object> query = new ContinuousQuery<>();
>>
>>         query.setLocalListener(new CacheEntryUpdatedListener<String, Object>() {
>>             @Override
>>             public void onUpdated(Iterable<CacheEntryEvent<? extends String, ? extends Object>> cacheEntryEvents)
>>                     throws CacheEntryListenerException {
>>                 Iterator<CacheEntryEvent<? extends String, ? extends Object>> iterator =
>>                         cacheEntryEvents.iterator();
>>
>>                 while (iterator.hasNext()) {
>>                     CacheEntryEvent<? extends String, ? extends Object> next = iterator.next();
>>
>>                     System.out.println("" + next.getKey());
>>                 }
>>             }
>>         });
>>
>>         cache1.query(query);
>>
>>         IgniteClient client2 = Ignition.startClient(clientConfiguration);
>>         ClientCache<String, Object> cache2 = client2.cache("test");
>>
>>         Thread[] threads = new Thread[threadNum];
>>
>>         for (int i = 0; i < threads.length; ++i) {
>>             threads[i] = new Thread(new OperationInsert(cache2, i, 500, threadNum));
>>         }
>>
>>         for (int i = 0; i < threads.length; ++i) {
>>             threads[i].start();
>>         }
>>
>>         for (Thread thread : threads) {
>>             thread.join();
>>         }
>>
>>         Thread.sleep(6);
>>     }
>>
>>     static class OperationInsert implements Runnable {
>>
>>         private ClientCache<String, Object> cache;
>>         private int k;
>>         private Integer test_rows;
>>         private Integer thread_cnt;
>>
>>         public OperationInsert(ClientCache<String, Object> cache, int k,
>>                 Integer test_rows, Integer thread_cnt) {
>>             this.cache = cache;
>>             this.k = k;
>>             this.test_rows = test_rows;
>>             this.thread_cnt = thread_cnt;
>>         }
>>
>>         @Override
>>         public void run() {
>>             for (int i = 100 + (test_rows/thread_cnt) * k; i < 100 + (test_rows/thread_cnt) * (k + 1); i++) {
>>                 cache.put("" + i, "aaa");
>>             }
>>         }
>>     }
>> }
>>
>>
>>
>> Version:
>>
>> The testing program uses Ignite version 2.15.0
>>
>> I attempted to insert data using one thread and did not observe any event
>> loss. In addition, I also tried an Ignite cluster with two or three
>> nodes, which can still listen to all 500

Re: How can I specify a column of Java object in my SQL select list after switching to Calcite

2022-11-10 Thread Alex Plehanov
Tore Yang,

Can you please provide a full stack trace of the exception and more details
about your case? Do you create the table via DDL or via QueryEntity? Do you
have a simple reproducer?
The error looks strange, because it's not the parser's responsibility to
check column types.

Fri, Nov 11, 2022 at 08:37, Jeremy McMillan :

> You might want to watch the recording of the summit talk on Ignite 3
> changes.
>
> There is a major change around how binary column types and objects are
> stored using Calcite.
>
> On Thu, Nov 10, 2022, 15:10 tore yang via user 
> wrote:
>
>> My cache has a column of type Java map; the column name is "mymap". When
>> I send the query "select id, mymap from mytable", it complains "Error: Failed
>> to parse the query. (state=4200,code=1001)". But when I run the query
>> "select * from mytable", the column mymap returns perfectly. This wasn't a
>> problem until I replaced "H2" with "Calcite", and it only fails for columns
>> with data type java.lang.Object.
>>
>> How do I specify a column which is of type "Java object"?
>>
>> Thanks!
>>
>


Re: How to enable "FUN" for Calcite in Apache Ignite

2022-11-08 Thread Alex Plehanov
Hello, there is currently no DATEDIFF function in the Calcite-based SQL
engine in Apache Ignite, and it can't be enabled by any property.
But you can use alternatives like:
(date1 - date2) MINUTES > INTERVAL 30 MINUTES
TIMESTAMPDIFF(MINUTE, date1, date2) > 30
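For instance, the query from the question below could be rewritten like this
(a sketch only; the cache and column names are taken from that question, and
`ignite` is assumed to be a started Ignite instance):

    import java.util.List;
    import org.apache.ignite.cache.query.SqlFieldsQuery;

    // Replace DATEDIFF("Minute", date1, date2) > 30 with TIMESTAMPDIFF,
    // which the Calcite-based engine understands out of the box.
    SqlFieldsQuery qry = new SqlFieldsQuery(
        "SELECT * FROM mycache WHERE TIMESTAMPDIFF(MINUTE, date1, date2) > 30");
    List<List<?>> rows = ignite.cache("mycache").query(qry).getAll();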


Mon, Nov 7, 2022 at 19:49, tore yang via user :

> I have some built-in operators in my query which can only be recognized
> when CalciteConnectionProperty.FUN is set, for instance "select * from
> mycache where DATEDIFF("Minute", date1, date2) > 30". The function "DATEDIFF"
> caused an error: "No match found for function signature "DATEDIFF()"".
> How can I enable "FUN" in Apache Ignite?
>
>
> Tao
>


Re: Calcite engine status

2022-09-30 Thread Alex Plehanov
Hello,

There are no definite plans about beta status removal. I think it will stay
in beta status for at least one more release (at least until 2.16).

Wed, Sep 28, 2022 at 16:30, Mike Wiesenberg :

> Hi,
>  When (and in what version) is the Calcite SQL engine integration scheduled
> to be out of experimental/beta status?
>  Has anyone here tried it out and have experiences to share?
>
> Thanks
>  MW
>


Re: Why does the thin client connect to all servers in the connection string?

2022-09-05 Thread Alex Plehanov
Hello,

When "partition awareness" is enabled, the client tries to connect to all
server nodes from the connection string. If "partition awareness" is
disabled, only one connection is maintained. Since Ignite 2.11 "partition
awareness" is enabled by default. See [1] for more information.

[1]:
https://ignite.apache.org/docs/latest/thin-clients/getting-started-with-thin-clients#partition-awareness
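If a single connection per client is preferred, partition awareness can be
switched off explicitly. A minimal sketch (the addresses are placeholders;
the trade-off is extra network hops for keys owned by other nodes):

    import org.apache.ignite.Ignition;
    import org.apache.ignite.client.IgniteClient;
    import org.apache.ignite.configuration.ClientConfiguration;

    ClientConfiguration cfg = new ClientConfiguration()
        .setAddresses("server1:10800", "server2:10800")
        // Keep a single connection; requests for keys on other nodes
        // are routed through the connected node.
        .setPartitionAwarenessEnabled(false);

    try (IgniteClient client = Ignition.startClient(cfg)) {
        // ... use the client as usual
    }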


Tue, Sep 6, 2022 at 02:09, Gregory Sylvain :

>
>
> Hi,
>
>
>
> I am running an Ignite 2.12.0 Service Grid with 5 Server Nodes and 32 Thin
> Client nodes in a private cloud of RHEL 7.9 VMs.  (no docker images).
>
>
>
> Doing a sqlline CLI query for connected clients, I get the expected 32
> connected clients. However, if I execute netstat (or ss) on all
> server nodes, I see an ESTABLISHED connection from each client to each
> server node on port 10800. These connections seem to be maintained (e.g.
> they do not time out after 2 hours).
>
>
>
> I am using Spring XML to configure both client and server nodes.  The
> server nodes are also utilizing systemd for reliability.
>
>
>
> Any idea what is going on?
>
>
>
> Thanks in advance.
>
> Greg
>
>
>


Re: What does javax.cache.CacheException: Failed to execute map query on remote node mean?

2022-09-01 Thread Alex Plehanov
Once the table is recreated (or the index rebuilt) the issue is fixed.
Upgrading from 2.12 to 2.13 (if all indexes having this issue were already
rebuilt on 2.12) should be fine.

Wed, Aug 31, 2022 at 23:43, John Smith :

> Ok but since I dropped and recreated the table I'm fine? It won't somehow
> throw that error again? And if I upgrade to 2.13 from 2.12 will I have the
> same issue?
>
> On Wed, Aug 31, 2022 at 3:31 PM Alex Plehanov 
> wrote:
>
>> John Smith,
>>
>> Thank you. This issue will be fixed in the upcoming 2.14 release.
>>
>> Wed, Aug 31, 2022 at 21:50, John Smith :
>>
>>> Here it is... And yes I recently upgraded to 2.12 from 2.8.1
>>>
>>> create table if not exists car_code (
>>> provider_id int,
>>> car_id int,
>>> car_code varchar(16),
>>> primary key (provider_id, car_id)
>>> ) with "template=replicatedTpl, key_type=CarCodeKey, value_type=CarCode";
>>>
>>> On Wed, Aug 31, 2022 at 7:25 AM Alex Plehanov 
>>> wrote:
>>>
>>>> John Smith,
>>>>
>>>> Can you please show the DDL for the car_code table? Does the PK of this
>>>> table include the provider_id or car_code columns?
>>>> I found a compatibility issue with the same behaviour: it happens when
>>>> storage created with an Ignite version before 2.11 is used with a newer
>>>> Ignite version. Have you upgraded the dev environment with existing storage
>>>> recently (before starting to get this error)?
>>>>
>>>>
>>>> Thu, Aug 4, 2022 at 17:06, John Smith :
>>>>
>>>>> Let me know if that makes any sense, because the test data is the same
>>>>> and the application code is the same. Only dropped and created the table
>>>>> again using DbEaver.
>>>>>
>>>>> On Wed, Aug 3, 2022 at 11:39 AM John Smith 
>>>>> wrote:
>>>>>
>>>>>> Hi, so I dropped the table and simply recreated it. Did NOT restart
>>>>>> the application.
>>>>>>
>>>>>> Now it works fine.
>>>>>>
>>>>>> On Wed, Aug 3, 2022 at 9:58 AM John Smith 
>>>>>> wrote:
>>>>>>
>>>>>>> How? The code is 100% the same between production and dev. And it's
>>>>>>> part of a bigger application.
>>>>>>>
>>>>>>> Only dev has the issue. I will drop and recreate the table if that
>>>>>>> fixes the issue then what?
>>>>>>>
>>>>>>> You are saying mismatch, it's a string period.
>>>>>>>
>>>>>>> "select car_id from car_code where provider_id = ? and car_code = ? 
>>>>>>> order by car_id asc limit 1;"
>>>>>>>
>>>>>>>
>>>>>>> The first parameter is Integer and the second one is String. there's
>>>>>>> no way this can mismatch... And even if the String was a UUID it's 
>>>>>>> still a
>>>>>>> string.
>>>>>>>
>>>>>>> public JsonArray query(final String sql, final long timeoutMs,
>>>>>>>         final Object... args) {
>>>>>>>     SqlFieldsQuery query = new SqlFieldsQuery(sql).setArgs(args);
>>>>>>>     query.setTimeout((int) timeoutMs, TimeUnit.MILLISECONDS);
>>>>>>>
>>>>>>>     try (QueryCursor<List<?>> cursor = cache.query(query)) {
>>>>>>>         List<JsonArray> rows = new ArrayList<>();
>>>>>>>         Iterator<List<?>> iterator = cursor.iterator();
>>>>>>>
>>>>>>>         while (iterator.hasNext()) {
>>>>>>>             List<?> currentRow = iterator.next();
>>>>>>>             JsonArray row = new JsonArray();
>>>>>>>
>>>>>>>             currentRow.forEach(o -> row.add(o));
>>>>>>>
>>>>>>>             rows.add(row);
>>>>>>>         }
>>>>>>>
>>>>>>>         promise.tryComplete(rows);
>>>>>>>     } catch (Exception ex) {
>>>>>>>         ex.printStackTrace();
>>>>>>>     }
>>>>>>> }
>>>>>>>
>>>>>>> Integer providerId = 1;
>>>>>>> St

Re: What does javax.cache.CacheException: Failed to execute map query on remote node mean?

2022-08-31 Thread Alex Plehanov
John Smith,

Thank you. This issue will be fixed in the upcoming 2.14 release.

Wed, Aug 31, 2022 at 21:50, John Smith :

> Here it is... And yes I recently upgraded to 2.12 from 2.8.1
>
> create table if not exists car_code (
> provider_id int,
> car_id int,
> car_code varchar(16),
> primary key (provider_id, car_id)
> ) with "template=replicatedTpl, key_type=CarCodeKey, value_type=CarCode";
>
> On Wed, Aug 31, 2022 at 7:25 AM Alex Plehanov 
> wrote:
>
>> John Smith,
>>
>> Can you please show the DDL for the car_code table? Does the PK of this
>> table include the provider_id or car_code columns?
>> I found a compatibility issue with the same behaviour: it happens when
>> storage created with an Ignite version before 2.11 is used with a newer
>> Ignite version. Have you upgraded the dev environment with existing storage
>> recently (before starting to get this error)?
>>
>>
>>> Thu, Aug 4, 2022 at 17:06, John Smith :
>>
>>> Let me know if that makes any sense, because the test data is the same
>>> and the application code is the same. Only dropped and created the table
>>> again using DbEaver.
>>>
>>> On Wed, Aug 3, 2022 at 11:39 AM John Smith 
>>> wrote:
>>>
>>>> Hi, so I dropped the table and simply recreated it. Did NOT restart the
>>>> application.
>>>>
>>>> Now it works fine.
>>>>
>>>> On Wed, Aug 3, 2022 at 9:58 AM John Smith 
>>>> wrote:
>>>>
>>>>> How? The code is 100% the same between production and dev. And it's
>>>>> part of a bigger application.
>>>>>
>>>>> Only dev has the issue. I will drop and recreate the table if that
>>>>> fixes the issue then what?
>>>>>
>>>>> You are saying mismatch, it's a string period.
>>>>>
>>>>> "select car_id from car_code where provider_id = ? and car_code = ? order 
>>>>> by car_id asc limit 1;"
>>>>>
>>>>>
>>>>> The first parameter is Integer and the second one is String. there's
>>>>> no way this can mismatch... And even if the String was a UUID it's still a
>>>>> string.
>>>>>
>>>>> public JsonArray query(final String sql, final long timeoutMs,
>>>>>         final Object... args) {
>>>>>     SqlFieldsQuery query = new SqlFieldsQuery(sql).setArgs(args);
>>>>>     query.setTimeout((int) timeoutMs, TimeUnit.MILLISECONDS);
>>>>>
>>>>>     try (QueryCursor<List<?>> cursor = cache.query(query)) {
>>>>>         List<JsonArray> rows = new ArrayList<>();
>>>>>         Iterator<List<?>> iterator = cursor.iterator();
>>>>>
>>>>>         while (iterator.hasNext()) {
>>>>>             List<?> currentRow = iterator.next();
>>>>>             JsonArray row = new JsonArray();
>>>>>
>>>>>             currentRow.forEach(o -> row.add(o));
>>>>>
>>>>>             rows.add(row);
>>>>>         }
>>>>>
>>>>>         promise.tryComplete(rows);
>>>>>     } catch (Exception ex) {
>>>>>         ex.printStackTrace();
>>>>>     }
>>>>> }
>>>>>
>>>>> Integer providerId = 1;
>>>>> String carCode = "FOO";
>>>>>
>>>>> query("select car_id from car_code where provider_id = ? and
>>>>> car_code = ? order by car_id asc limit 1;", 3000, providerId, cardCode);
>>>>>
>>>>>
>>>>>
>>>>> On Wed, Aug 3, 2022 at 6:50 AM Taras Ledkov 
>>>>> wrote:
>>>>>
>>>>>> Hi John and Don,
>>>>>>
>>>>>> I guess the root cause is a data type mismatch between the table
>>>>>> schema and the actual data in the store, or the type of the query
>>>>>> parameter. To explore the gap, it would be very handy if you could
>>>>>> provide a small reproducer (standalone project or PR somewhere).
>>>>>>
>>>>>> > In my case I'm not even using UUID fields. Also the same code 2
>>>>>> diff environment dev vs prod doesn't cause the issue. I'm lucky enough 
>>>>>> that
>>>>>> it's on dev and prod is ok.
>>>>>> >
>>>>>> > But that last part might be misleading because in prod I think it
>>>>>> happened early on during upgrade and all I did was recreate the sql 
>>>>>> table.
>>>>>> >
>>>>>> > So before I do the same on dev... I want to see what the issue is.
>>>>>> >
>>>>>> > On Tue., Aug. 2, 2022, 6:06 p.m. ,  wrote:
>>>>>> >
>>>>>> >> I‘m only speculating but this looks very similar to the issue I
>>>>>> had last week and reported to the group here.
>>>>>> >>
>>>>>> >> Caused by: org.h2.message.DbException: Hexadecimal string with odd
>>>>>> number of characters: "5" [90003-197]
>>>>>>
>>>>>>
>>>>>> --
>>>>>> With best regards,
>>>>>> Taras Ledkov
>>>>>>
>>>>>


Re: Page replacement priority

2022-08-31 Thread Alex Plehanov
Hello,

Data pages have the same priority as index pages. The algorithm can be
configured by the DataRegionConfiguration.PageReplacementMode property.
See
https://ignite.apache.org/docs/latest/memory-configuration/replacement-policies
for more information.
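A minimal configuration sketch (the region name is a placeholder; available
modes are RANDOM_LRU, SEGMENTED_LRU and CLOCK):

    import org.apache.ignite.configuration.DataRegionConfiguration;
    import org.apache.ignite.configuration.DataStorageConfiguration;
    import org.apache.ignite.configuration.IgniteConfiguration;
    import org.apache.ignite.configuration.PageReplacementMode;

    DataRegionConfiguration region = new DataRegionConfiguration()
        .setName("persistent-region")
        .setPersistenceEnabled(true)
        // Page replacement only kicks in when the region doesn't fit in RAM.
        .setPageReplacementMode(PageReplacementMode.SEGMENTED_LRU);

    IgniteConfiguration cfg = new IgniteConfiguration()
        .setDataStorageConfiguration(new DataStorageConfiguration()
            .setDataRegionConfigurations(region));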

Wed, Aug 31, 2022 at 04:18, 38797715 <38797...@qq.com>:

> hello,
>
> When there is insufficient memory, if a page replacement occurs, will the
> data be swapped out of memory first, or will the index also be swapped out
> of memory?
>
> Or are there any optimized algorithms?
>


Re: What does javax.cache.CacheException: Failed to execute map query on remote node mean?

2022-08-31 Thread Alex Plehanov
John Smith,

Can you please show the DDL for the car_code table? Does the PK of this
table include the provider_id or car_code columns?
I found a compatibility issue with the same behaviour: it happens when
storage created with an Ignite version before 2.11 is used with a newer
Ignite version. Have you upgraded the dev environment with existing storage
recently (before starting to get this error)?


Thu, Aug 4, 2022 at 17:06, John Smith :

> Let me know if that makes any sense, because the test data is the same and
> the application code is the same. Only dropped and created the table again
> using DbEaver.
>
> On Wed, Aug 3, 2022 at 11:39 AM John Smith  wrote:
>
>> Hi, so I dropped the table and simply recreated it. Did NOT restart the
>> application.
>>
>> Now it works fine.
>>
>> On Wed, Aug 3, 2022 at 9:58 AM John Smith  wrote:
>>
>>> How? The code is 100% the same between production and dev. And it's part
>>> of a bigger application.
>>>
>>> Only dev has the issue. I will drop and recreate the table if that fixes
>>> the issue then what?
>>>
>>> You are saying mismatch, it's a string period.
>>>
>>> "select car_id from car_code where provider_id = ? and car_code = ? order 
>>> by car_id asc limit 1;"
>>>
>>>
>>> The first parameter is Integer and the second one is String. there's no
>>> way this can mismatch... And even if the String was a UUID it's still a
>>> string.
>>>
>>> public JsonArray query(final String sql, final long timeoutMs, final
>>>         Object... args) {
>>>     SqlFieldsQuery query = new SqlFieldsQuery(sql).setArgs(args);
>>>     query.setTimeout((int) timeoutMs, TimeUnit.MILLISECONDS);
>>>
>>>     try (QueryCursor<List<?>> cursor = cache.query(query)) {
>>>         List<JsonArray> rows = new ArrayList<>();
>>>         Iterator<List<?>> iterator = cursor.iterator();
>>>
>>>         while (iterator.hasNext()) {
>>>             List<?> currentRow = iterator.next();
>>>             JsonArray row = new JsonArray();
>>>
>>>             currentRow.forEach(o -> row.add(o));
>>>
>>>             rows.add(row);
>>>         }
>>>
>>>         promise.tryComplete(rows);
>>>     } catch (Exception ex) {
>>>         ex.printStackTrace();
>>>     }
>>> }
>>>
>>> Integer providerId = 1;
>>> String carCode = "FOO";
>>>
>>> query("select car_id from car_code where provider_id = ? and
>>> car_code = ? order by car_id asc limit 1;", 3000, providerId, cardCode);
>>>
>>>
>>>
>>> On Wed, Aug 3, 2022 at 6:50 AM Taras Ledkov  wrote:
>>>
 Hi John and Don,

 I guess the root cause is a data type mismatch between the table schema
 and the actual data in the store, or the type of the query parameter.
 To explore the gap, it would be very handy if you could provide a small
 reproducer (standalone project or PR somewhere).

 > In my case I'm not even using UUID fields. Also the same code 2 diff
 environment dev vs prod doesn't cause the issue. I'm lucky enough that it's
 on dev and prod is ok.
 >
 > But that last part might be misleading because in prod I think it
 happened early on during upgrade and all I did was recreate the sql table.
 >
 > So before I do the same on dev... I want to see what the issue is.
 >
 > On Tue., Aug. 2, 2022, 6:06 p.m. ,  wrote:
 >
 >> I‘m only speculating but this looks very similar to the issue I had
 last week and reported to the group here.
 >>
 >> Caused by: org.h2.message.DbException: Hexadecimal string with odd
 number of characters: "5" [90003-197]


 --
 With best regards,
 Taras Ledkov

>>>


Re: version of h2 Database Engine.

2022-02-06 Thread Alex Plehanov
Hello,

Unfortunately, updating to the newest H2 version is not possible, since H2
removed the Ignite support [1].
Currently, a new SQL engine for Ignite is under development. The first
beta release of this engine is planned for Apache Ignite 2.13 [2].
But it still requires the ignite-indexing module, which requires H2. This
problem will be settled in one of the versions after 2.13.

[1]: https://github.com/h2database/h2database/pull/2227
[2]: https://lists.apache.org/thread/yck17qhcgg2qmzp374q69xvlhb9ocwhh


Wed, Feb 2, 2022 at 19:13, tore yang :

> Hi,
>
> Today the library h2 1.4.197 was banned by my firm for a security issue; as
> a result my Ignite applications (running on version 2.12.0) cannot be
> released. When I upgrade to a version like 2.1.210, the clients fail to
> connect to the cluster, complaining there is no class definition for
> org.h2.index.BaseIndex.
> I guess Apache Ignite doesn't support h2 2.1.210 as of now? If so, when is
> it going to support it?
>
>
> Regards,
>
> Tao
>
>
>


Re: BinaryObjectException

2021-11-22 Thread Alex Plehanov
A thin client before version 2.9 throws an exception when there is more
than one schema with the same type name and compact footers are enabled.
A schema is the set of field names of the object, so using only one schema
per type (if that's possible in your case) will solve the problem.
If you don't want to update the Ignite server, you can update only the thin
client; that will also solve the problem.
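To illustrate how several schemas appear for one type name, two writes with
different field sets are enough (a hedged sketch using the binary builder API;
the type and field names are made up, and `ignite` is a started Ignite node):

    import org.apache.ignite.binary.BinaryObject;

    // Same type name, different field sets => two schemas for "Person".
    BinaryObject v1 = ignite.binary().builder("Person")
        .setField("name", "John")
        .build();

    BinaryObject v2 = ignite.binary().builder("Person")
        .setField("name", "Jane")
        .setField("age", 42)   // extra field => a second schema
        .build();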

Mon, Nov 22, 2021 at 12:57, Naveen Kumar :

> Thanks Alex..
> Here it says it throws this exception when Ignite does not find the record.
> In our case the data exists, and it gets resolved if we do a node restart.
>
> And is there any workaround to fix this issue with the current version,
> Ignite 2.8.1?
>
> Thanks
>
> On Mon, Nov 22, 2021 at 12:20 PM Alex Plehanov 
> wrote:
>
>> Hello!
>>
>> Most probably it's related to ticket [1] that is fixed in 2.9 release.
>>
>> [1]: https://issues.apache.org/jira/browse/IGNITE-13192
>>
>>> Mon, Nov 22, 2021 at 03:11, Naveen Kumar :
>>
>>> Hi All
>>>
>>> We are using 2.8.1.
>>> At times we get this BinaryObjectException while calling GETs through the
>>> thin client; we don't see any errors or exceptions in the node logs, it is
>>> only seen on the client side.
>>> What could be the potential reason for this?
>>>
>>> Attached the exact error message
>>>
>>>
>>>
>>>
>>> Thanks
>>>
>>> --
>>> Thanks & Regards,
>>> Naveen Bandaru
>>>
>>
>
> --
> Thanks & Regards,
> Naveen Bandaru
>


Re: BinaryObjectException

2021-11-21 Thread Alex Plehanov
Hello!

Most probably it's related to ticket [1] that is fixed in 2.9 release.

[1]: https://issues.apache.org/jira/browse/IGNITE-13192

Mon, Nov 22, 2021 at 03:11, Naveen Kumar :

> Hi All
>
> We are using 2.8.1.
> At times we get this BinaryObjectException while calling GETs through the
> thin client; we don't see any errors or exceptions in the node logs, it is
> only seen on the client side.
> What could be the potential reason for this?
>
> Attached the exact error message
>
>
>
>
> Thanks
>
> --
> Thanks & Regards,
> Naveen Bandaru
>


Re: Problem with ScanQuery with filter on ClientCache

2021-10-18 Thread Alex Plehanov
You should deploy not only the Person class, but also the filter class. You
can, for example, make a jar with the required classes and put it on the
server classpath.
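For example, the lambda filter from the reproducer could be replaced with a
named class, compiled into a jar and dropped into the server classpath (e.g.
under $IGNITE_HOME/libs) before the nodes start. A sketch only; the Integer
key type and the class name are assumptions:

    import org.apache.ignite.lang.IgniteBiPredicate;

    // Deployed on every server node so scan-query filters can be deserialized.
    public class PersonNameFilter implements IgniteBiPredicate<Integer, Person> {
        private final String srchName;

        public PersonNameFilter(String srchName) {
            this.srchName = srchName;
        }

        @Override
        public boolean apply(Integer key, Person p) {
            return p.getName().equals(srchName);
        }
    }

The client would then pass new ScanQuery<>(new PersonNameFilter(srchName)).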

Mon, Oct 18, 2021 at 23:53, Prasad Kommoju :

> Hi Alex,
>
>
>
> Thanks for the clarification. I am running it on a laptop, using the
> default configuration file, so there is only one node, and the client code
> and the server both run on the same laptop. Could you suggest how to deploy
> the Person class to the other nodes (in production there will be some) and
> in the laptop case?
>
>
>
>
>
> -
>
> Regards,
>
> Prasad Kommoju
>
>
>
> *From:* Alex Plehanov 
> *Sent:* Monday, October 18, 2021 1:35 PM
> *To:* user@ignite.apache.org
> *Subject:* Re: Problem with ScanQuery with filter on ClientCache
>
>
>
> Hello,
>
>
>
> Thin clients don't have a peer class loader (and most probably never will).
> To use predicates you should deploy the classes for these predicates to the
> Ignite nodes. Implementing Serializable on the Person class will not help here.
>
>
>
> Mon, Oct 18, 2021 at 20:14, Prasad Kommoju :
>
> Thanks for the tip but it did not help. Also, there was another suggestion
> that Person should implement Serializable. The examples do not show this
> requirement, I am going to try this too.
>
>
>
>
>
>
>
> -
>
> Regards,
>
> Prasad Kommoju
>
>
>
> *From:* Ilya Kazakov 
> *Sent:* Sunday, October 17, 2021 8:41 PM
> *To:* user@ignite.apache.org
> *Subject:* Re: Problem with ScanQuery with filter on ClientCache
>
>
>
> Hello, it looks like you should enable Peer Class Loading:
> https://ignite.apache.org/docs/latest/code-deployment/peer-class-loading
>
>
>
> --
>
> Ilya Kazakov
>
>
>
> Sat, Oct 16, 2021 at 02:47, Prasad Kommoju :
>
>
>
> Hi
>
>
>
> I am having trouble getting ScanQuery to work with a filter from thin
> client.
>
>
>
> I have attached a working minimal reproduction program (IntelliJ project
> zip export), the Ignite config file and the part of the Ignite log file
> after running the query. The code is fashioned along the example in the
> documentation.
>
>
>
> Here is the querying part of the code:
>
>
>
> …
>
> IgniteBiPredicate<Integer, Person> filter =
>     (key, p) -> p.getName().equals(srchName);
>
> try (QueryCursor<Cache.Entry<Integer, Person>> scnCursor =
>          personCache.query(new ScanQuery<>(filter))) {
>     scnCursor.forEach(
>         entry -> System.out.println(entry.getKey() +
>             " " + entry.getValue()));
> } catch (Exception e) {
>     System.out.println("Scan query failed " + e.getMessage());
> }
>
>
>
> …
>
>
>
> Any help in pointing out the problem will be greatly appreciated.
>
>
>
>
>
> -
>
> Regards,
>
> Prasad Kommoju
>
>


Re: Problem with ScanQuery with filter on ClientCache

2021-10-18 Thread Alex Plehanov
Hello,

Thin clients don't have a peer class loader (and most probably never will).
To use predicates you should deploy the classes for these predicates to the
Ignite nodes. Implementing Serializable on the Person class will not help here.

Mon, Oct 18, 2021 at 20:14, Prasad Kommoju :

> Thanks for the tip, but it did not help. Also, there was another suggestion
> that Person should implement Serializable. The examples do not show this
> requirement; I am going to try this too.
>
>
>
>
>
>
>
> -
>
> Regards,
>
> Prasad Kommoju
>
>
>
> *From:* Ilya Kazakov 
> *Sent:* Sunday, October 17, 2021 8:41 PM
> *To:* user@ignite.apache.org
> *Subject:* Re: Problem with ScanQuery with filter on ClientCache
>
>
>
> Hello, it looks like you should enable Peer Class Loading:
> https://ignite.apache.org/docs/latest/code-deployment/peer-class-loading
> 
>
>
>
> --
>
> Ilya Kazakov
>
>
>
> Sat, Oct 16, 2021 at 02:47, Prasad Kommoju :
>
>
>
> Hi
>
>
>
> I am having trouble getting ScanQuery to work with a filter from thin
> client.
>
>
>
> I have attached a working minimal reproduction program (IntelliJ project
> zip export), the Ignite config file and the part of the Ignite log file
> after running the query. The code is fashioned along the example in the
> documentation.
>
>
>
> Here is the querying part of the code:
>
>
>
> …
>
> IgniteBiPredicate<Integer, Person> filter =
>     (key, p) -> p.getName().equals(srchName);
>
> try (QueryCursor<Cache.Entry<Integer, Person>> scnCursor =
>          personCache.query(new ScanQuery<>(filter))) {
>     scnCursor.forEach(
>         entry -> System.out.println(entry.getKey() +
>             " " + entry.getValue()));
> } catch (Exception e) {
>     System.out.println("Scan query failed " + e.getMessage());
> }
>
>
>
> …
>
>
>
> Any help in pointing out the problem will be greatly appreciated.
>
>
>
>
>
> -
>
> Regards,
>
> Prasad Kommoju
>
>


Re: Problem with Cache KV Remote Query

2021-10-18 Thread Alex Plehanov
Hello,

How was the entry inserted into the cache? You are trying to get this entry
via a thin client; if the entry was inserted via a thick client (an Ignite
node) you can face such a problem. The Ignite thin client and Ignite nodes
have different default "compact footer" property values, so POJO keys are
marshalled in different ways and treated as different keys by thin clients
and Ignite nodes.
Try to change the "compact footer" property in the thin-client configuration:

ClientConfiguration cfg = new ClientConfiguration()
    .setBinaryConfiguration(new BinaryConfiguration().setCompactFooter(true))
    .setAddresses("127.0.0.1:10800");


Mon, Oct 18, 2021 at 15:02, MJ <6733...@qq.com>:

> Hi,
>
>
> I experienced below problem when testing the “Affinity Colocation”
> functionality. The code is from
> https://ignite.apache.org/docs/latest/data-modeling/affinity-collocation#configuring-affinity-key
> .
>
> When I run the code in a single JVM, it works perfectly and successfully
> retrieves the cached object (personCache.get(new PersonKey(1, "company1"))).
> But when I try to run the client code in another new JVM (meanwhile leaving
> the server node running locally), something goes wrong (see below). Can
> anyone please elaborate on why the first test case succeeded but the second
> one failed?
>
> Logger log = LoggerFactory.getLogger(getClass());
>
> //success
> @Test
> public void test_iterate() throws ClientException, Exception {
>     ClientConfiguration cfg = new ClientConfiguration().setAddresses("127.0.0.1:10800");
>     try (IgniteClient client = Ignition.startClient(cfg)) {
>         ClientCache<PersonKey, Person> cache = client.cache("persons");
>         try (QueryCursor<Cache.Entry<PersonKey, Person>> qryCursor =
>                 cache.query(new ScanQuery<>(null))) {
>             qryCursor.forEach(entry -> System.out.println("Key = " + entry.getKey() +
>                 ", Value = " + entry.getValue()));
>         }
>     }
> }
>
> //fail
> @Test
> public void test_query() throws ClientException, Exception {
>     ClientConfiguration cfg = new ClientConfiguration().setAddresses("127.0.0.1:10800");
>     try (IgniteClient client = Ignition.startClient(cfg)) {
>         ClientCache<PersonKey, Person> cache = client.cache("persons");
>         Person row = cache.get(new PersonKey(1, "company1"));
>
>         Assert.assertNotNull(row);  // no data returned
>         log.info("{}", row);
>     }
> }
>
>
> Thanks,
> -MJ
>
>


Re: [EXT] Re: Crash of Ignite (B+Tree corrupted) on a large PutIfAbsent

2021-10-18 Thread Alex Plehanov
> Will you add a test in Ignite to avoid the crash
I'm not quite sure, but perhaps a crash is the best we can do in this case.
Throwing an exception to the user might not be enough, since some changes
may already have been made in the DB prior to the exception, and this can
lead to data inconsistency.

> give a clearer error message
The nested exception already contains the root cause of the problem (for
example: "Record is too long [capacity=67108864, size=92984514]"; see the
ticket with the same problem reproduced in Java [1]), but perhaps this
nested exception is not correctly displayed by Ignite.NET.

[1]: https://issues.apache.org/jira/browse/IGNITE-13965

Sat, Oct 16, 2021 at 15:02, Semeria, Vincent :

> Yes indeed, setting the WAL segment size to 150 megabytes accepts my
> PutIfAbsent. Thanks.
>
>
>
> Will you add a test in Ignite to avoid the crash, give a clearer error
> message and mention in the documentation that the WAL segment size should
> be higher than any single cache entry ? At the moment the doc just says this
>
> //
> // Summary:
> //     Gets or sets the size of the WAL (Write Ahead Log) segment.
> //     For performance reasons, the whole WAL is split into files of
> //     fixed length called segments.
>
>
>
> The limit should also be written in this page
>
> Ignite Persistence | Ignite Documentation (apache.org)
> <https://ignite.apache.org/docs/latest/persistence/native-persistence>
>
>
>
> *From:* Alex Plehanov 
> *Sent:* Friday, October 15, 2021 16:12
> *To:* user@ignite.apache.org
> *Subject:* [EXT] Re: Crash of Ignite (B+Tree corrupted) on a large
> PutIfAbsent
>
>
>
> Hello,
>
>
>
> Perhaps you have too small WAL segment size (WAL segment should be large
> enough to fit the whole cache entry), try to
> change DataStorageConfiguration.WalSegmentSize property.
>
>
>
> Thu, Oct 14, 2021 at 00:43, Semeria, Vincent <
> vincent.seme...@finastra.com>:
>
> Hello Ignite,
>
>
>
> I currently use the C# API of Ignite 2.10 to store large objects of type V
> in an ICache. Typically an object of V is around 100 megabytes.
> My data region is persisted on the hard drive. PutIfAbsent crashes Ignite
> with the complicated message below. As a workaround, I split type V into
> smaller types and used loops of smaller PutIfAbsent, which succeeded.
> Ultimately the data stored in the cache is the same, which shows that
> Ignite accepts my data (this is not a problem in the binary serializer).
>
>
>
> Is there a configuration of the data region that would accept a single
> PutIfAbsent of 100 megabytes ?
>
>
>
> Anyway Ignite should probably not crash when this limit is exceeded.
> Please send a clean error instead like “Insertion request exceeds limit
> XYZ” and keep Ignite alive in this case.
>
>
>
> Regards,
>
> Vincent Semeria
>
>
>
>
>
> error :
> PerformanceAttributionServer[PerformanceAttributionServer1]_Global@PARDH7JQHS2
> (41660.6010) : [2021/10/12-10:21:48.215] :
> Apache.Ignite.NLog.IgniteNLogLogger::LoggerLog() : Critical system error
> detected. Will be handled accordingly to configured handler
> [hnd=StopNodeOrHaltFailureHandler [tryStop=false, timeout=0,
> super=AbstractFailureHandler [ignoredFailureTypes=UnmodifiableSet
> [SYSTEM_WORKER_BLOCKED, SYSTEM_CRITICAL_OPERATION_TIMEOUT]]],
> failureCtx=FailureContext [type=CRITICAL_ERROR, err=class
> o.a.i.i.processors.cache.persistence.tree.CorruptedTreeException: B+Tree is
> corrupted [pages(groupId, pageId)=[IgniteBiTuple [val1=241659666,
> val2=1127239936638982]], msg=Runtime failure on search row: SearchRow
> [key=KeyCacheObjectImpl [part=312,
> val=56ae72d3-a91a-4211-8279-0b0447881544, hasValBytes=true],
> hash=746501958, cacheId=0
>
> error :
> PerformanceAttributionServer[PerformanceAttributionServer1]_Global@PARDH7JQHS2
> (41660.6010) : [2021/10/12-10:21:48.216] :
> org.apache.ignite.internal.processors.failure.FailureProcessor::LoggerLog()
> : A critical problem with persistence data structures was detected. Please
> make backup of persistence storage and WAL files for further analysis.
> Persistence storage path:  WAL path: db/wal WAL archive path: db/wal/archive
>
> error :
> PerformanceAttributionServer[PerformanceAttributionServer1]_Global@PARDH7JQHS2
> (41660.6010) : [2021/10/12-10:21:48.219] :
> org.apache.ignite.internal.processors.failure.FailureProcessor::LoggerLog()
> : No deadlocked threads detected.
>
> error :
> PerformanceAttributionServer[PerformanceAttributionServer1]_Global@PARDH7JQHS2
> (41660.6010) : [2021/10/12-10:21:48.276] :
> org.apach

Re: Crash of Ignite (B+Tree corrupted) on a large PutIfAbsent

2021-10-15 Thread Alex Plehanov
Hello,

Perhaps your WAL segment size is too small (a WAL segment should be large
enough to fit a whole cache entry); try to change the
DataStorageConfiguration.WalSegmentSize property.
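A minimal sketch in Java (the 150 MB figure is the value that worked in the
follow-up to this thread for a ~100 MB entry; Ignite.NET exposes the same
setting as DataStorageConfiguration.WalSegmentSize):

    import org.apache.ignite.configuration.DataStorageConfiguration;
    import org.apache.ignite.configuration.IgniteConfiguration;

    // Make each WAL segment large enough to hold the biggest single entry.
    IgniteConfiguration cfg = new IgniteConfiguration()
        .setDataStorageConfiguration(new DataStorageConfiguration()
            .setWalSegmentSize(150 * 1024 * 1024));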

Thu, Oct 14, 2021 at 00:43, Semeria, Vincent :

> Hello Ignite,
>
>
>
> I currently use the C# API of Ignite 2.10 to store large objects of type V
> in an ICache. Typically an object of V is around 100 megabytes.
> My data region is persisted on the hard drive. PutIfAbsent crashes Ignite
> with the complicated message below. As a workaround, I split type V into
> smaller types and used loops of smaller PutIfAbsent, which succeeded.
> Ultimately the data stored in the cache is the same, which shows that
> Ignite accepts my data (this is not a problem in the binary serializer).
>
>
>
> Is there a configuration of the data region that would accept a single
> PutIfAbsent of 100 megabytes ?
>
>
>
> Anyway Ignite should probably not crash when this limit is exceeded.
> Please send a clean error instead like “Insertion request exceeds limit
> XYZ” and keep Ignite alive in this case.
>
>
>
> Regards,
>
> Vincent Semeria
>
>
>
>
>
> error :
> PerformanceAttributionServer[PerformanceAttributionServer1]_Global@PARDH7JQHS2
> (41660.6010) : [2021/10/12-10:21:48.215] :
> Apache.Ignite.NLog.IgniteNLogLogger::LoggerLog() : Critical system error
> detected. Will be handled accordingly to configured handler
> [hnd=StopNodeOrHaltFailureHandler [tryStop=false, timeout=0,
> super=AbstractFailureHandler [ignoredFailureTypes=UnmodifiableSet
> [SYSTEM_WORKER_BLOCKED, SYSTEM_CRITICAL_OPERATION_TIMEOUT]]],
> failureCtx=FailureContext [type=CRITICAL_ERROR, err=class
> o.a.i.i.processors.cache.persistence.tree.CorruptedTreeException: B+Tree is
> corrupted [pages(groupId, pageId)=[IgniteBiTuple [val1=241659666,
> val2=1127239936638982]], msg=Runtime failure on search row: SearchRow
> [key=KeyCacheObjectImpl [part=312,
> val=56ae72d3-a91a-4211-8279-0b0447881544, hasValBytes=true],
> hash=746501958, cacheId=0
>
> error :
> PerformanceAttributionServer[PerformanceAttributionServer1]_Global@PARDH7JQHS2
> (41660.6010) : [2021/10/12-10:21:48.216] :
> org.apache.ignite.internal.processors.failure.FailureProcessor::LoggerLog()
> : A critical problem with persistence data structures was detected. Please
> make backup of persistence storage and WAL files for further analysis.
> Persistence storage path:  WAL path: db/wal WAL archive path: db/wal/archive
>
> error :
> PerformanceAttributionServer[PerformanceAttributionServer1]_Global@PARDH7JQHS2
> (41660.6010) : [2021/10/12-10:21:48.219] :
> org.apache.ignite.internal.processors.failure.FailureProcessor::LoggerLog()
> : No deadlocked threads detected.
>
> error :
> PerformanceAttributionServer[PerformanceAttributionServer1]_Global@PARDH7JQHS2
> (41660.6010) : [2021/10/12-10:21:48.276] :
> org.apache.ignite.internal.processors.failure.FailureProcessor::LoggerLog()
> : Thread dump at 2021/10/12 10:21:48 CEST
>
> Thread [name="sys-#200", id=235, state=TIMED_WAITING, blockCnt=0,
> waitCnt=1]
>
> Lock
> [object=java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@529e4706,
> ownerName=null, ownerId=-1]
>
> at sun.misc.Unsafe.park(Native Method)
>
> at
> java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
>
> at
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
>
> at
> java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
>
> at
> java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073)
>
> at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
>
> at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>
> at java.lang.Thread.run(Thread.java:748)
>
>
>
> Thread [name="sys-#199", id=234, state=TIMED_WAITING, blockCnt=0,
> waitCnt=1]
>
> Lock
> [object=java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@529e4706,
> ownerName=null, ownerId=-1]
>
> at sun.misc.Unsafe.park(Native Method)
>
> at
> java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
>
> at
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
>
> at
> java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
>
> at
> java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073)
>
> at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
>
> at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>
> at java.lang.Thread.run(Thread.java:748)
>
>
>
> Thread [name="sys-#198", id=233, state=TIMED_WAITING, blockCnt=0,
> waitCnt=1]
>
> Lock
> 

Re: BinaryObjectException: Unsupported protocol version

2021-08-19 Thread Alex Plehanov
Hello,

Most probably it's related to [1], which is fixed since Ignite 2.9.1.

[1]: https://issues.apache.org/jira/browse/IGNITE-13401

Thu, Aug 19, 2021 at 09:53, Naveen Kumar :

> Hi All
>
> We are using Ignite 2.8.1 and mostly use thin clients.
> We have been facing a strange issue for the last couple of days: all PUTs
> are working fine, but GETs are failing with the reason: BinaryObjectException:
> Unsupported protocol version.
>
> After a node restart, GETs started working fine, and we don't see anything
> specific in the node logs either.
>
> Any pointers on this: how did the node restart resolve the issue?
>
> None of the GETs were working earlier, only PUTs (PUTs were done through
> JDBC SQL); GETs are using the Java KV API.
>
> --
> Thanks & Regards,
> Naveen Bandaru
>


Re: Best practices on how to approach data centre aware affinity

2021-08-05 Thread Alex Plehanov
Hello,

You can create your own cache template with the affinity function you
require (currently you use the predefined "partitioned" template, which only
sets the cache mode to "PARTITIONED"). See [1] for more information about
cache templates.

> Is this the right approach
> How do we handle existing data, changing the affinity function will cause
Ignite to not be able to find existing data right?
You can't change a cache's configuration after the cache is created. In your
example these changes will just be ignored. The only way to change the cache
configuration is to create a new cache and migrate the data.

> How would you recommend implementing the affinity function to be aware of
the data centre?
It's better to use the standard affinity function with a backup filter for
such cases. There is one shipped with Ignite (see [2]).

[1]:
https://ignite.apache.org/docs/latest/configuring-caches/configuration-overview#cache-templates
[2]:
https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/cache/affinity/rendezvous/ClusterNodeAttributeAffinityBackupFilter.html
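A sketch of registering such a template programmatically (the "DC" attribute
name and the template name are assumptions; each node would advertise its data
centre via IgniteConfiguration.setUserAttributes):

    import org.apache.ignite.Ignite;
    import org.apache.ignite.cache.affinity.rendezvous.ClusterNodeAttributeAffinityBackupFilter;
    import org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunction;
    import org.apache.ignite.configuration.CacheConfiguration;

    RendezvousAffinityFunction aff = new RendezvousAffinityFunction();
    // Place backups only on nodes whose "DC" attribute differs from the
    // nodes already chosen for the partition.
    aff.setAffinityBackupFilter(new ClusterNodeAttributeAffinityBackupFilter("DC"));

    CacheConfiguration<Object, Object> tmpl = new CacheConfiguration<>("dcAware*")
        .setBackups(2)
        .setAffinity(aff);

    // Register as a template; SQL tables can then reference it, e.g.
    // WITH "template=dcAware,...".
    ignite.addCacheConfiguration(tmpl);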

Thu, Aug 5, 2021 at 09:40, Courtney Robinson :

> Hi all,
> Our growth with Ignite continues and as we enter the next phase, we need
> to support multi-cluster deployments for our platform.
> We deploy Ignite and the rest of our stack in Kubernetes and we're in the
> early stages of designing what a multi-region deployment should look like.
> We are 90% SQL based when using Ignite, the other 10% includes Ignite
> messaging, Queues and compute.
>
> In our case we have thousands of tables
>
> CREATE TABLE IF NOT EXISTS Person (
>   id int,
>   city_id int,
>   name varchar,
>   company_id varchar,
>   PRIMARY KEY (id, city_id)) WITH "template=...";
>
> In our case, most tables use a template that looks like this:
>
>
> partitioned,backups=2,data_region=hypi,cache_group=hypi,write_synchronization_mode=primary_sync,affinity_key=instance_id,atomicity=ATOMIC,cache_name=Person,key_type=PersonKey,value_type=PersonValue
>
> I'm aware of affinity co-location (
> https://ignite.apache.org/docs/latest/data-modeling/affinity-collocation)
> and in the past when we used the key value APIs more than SQL we also used
> custom affinity a function to control placement.
>
> What I don't know is how to best do this with SQL defined caches.
> We will have at least 3 Kubernetes clusters, each in a different data
> centre, let's say EU_WEST, EU_EAST, CAN0
>
> Previously we provided environment variables that our custom affinity
> function would use and we're thinking of providing the data centre name
> this way.
>
> We have 2 backups in all cases + the primary and so we want the primary in
> one DC and each backup to be in a different DC.
>
> There is no syntax in the SQL template that we could find to enables
> specifying a custom affinity function.
> Our instance_id column currently used has no common prefix or anything to
> associate with a DC.
>
> We're thinking of getting the cache for each table and then setting the
> affinity function to replace the default RendevousAffinityFunction the way
> we did before we switched to SQL.
> Something like this:
>
> repo.ctx.ignite.cache("Person").getConfiguration(org.apache.ignite.configuration.CacheConfiguration)
> .setAffinity(new org.apache.ignite.cache.affinity.AffinityFunction() {
> ...
> })
>
>
> There are a few things unclear about this:
>
>1. Is this the right approach?
>2. How do we handle existing data, changing the affinity function will
>cause Ignite to not be able to find existing data right?
>3. How would you recommend implementing the affinity function to be
>aware of the data centre?
>4. Are there any other caveats we need to be thinking about?
>
> There is a lot of existing data, we want to try to avoid a full copy/move
> to new tables if possible, that will prove to be very difficult in
> production.
>
> Regards,
> Courtney Robinson
> Founder and CEO, Hypi
> Tel: ++44 208 123 2413 (GMT+0) 
>
> 
> https://hypi.io
>


Re: select * throws cannot find schema for object with compact footer

2021-05-24 Thread Alex Plehanov
Hello,

Probably related to [1] (Fixed in Ignite 2.9)

[1]: https://issues.apache.org/jira/browse/IGNITE-13192

Mon, May 24, 2021 at 18:55, Naveen :

> When I tried to retrieve the fields of the binary object it did throw an
> exception; after deleting all such records, it started working fine. We had
> such records earlier as well and it worked fine, never had any issues; not
> sure what caused this issue all of a sudden.
>
> Thanks
>
>
>
>


Re: [2.9.1] Custom failure handler is not being invoked

2021-04-22 Thread Alex Plehanov
Hello,

There is no way to configure a custom failure handler for segmentation
errors; you can only choose from the 3 available variants bound to the
segmentation policy (use IgniteConfiguration.setSegmentationPolicy to
change the policy)
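A minimal sketch (SegmentationPolicy values are RESTART_JVM, STOP and NOOP;
STOP is the default):

    import org.apache.ignite.configuration.IgniteConfiguration;
    import org.apache.ignite.plugin.segmentation.SegmentationPolicy;

    // React to segmentation by restarting the JVM instead of stopping the node.
    IgniteConfiguration cfg = new IgniteConfiguration()
        .setSegmentationPolicy(SegmentationPolicy.RESTART_JVM);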

Thu, Apr 22, 2021 at 17:10, ashishg :

> Hi,
>
> I have set the custom failure handler using the IgniteConfiguration:
>
> IgniteConfiguration cfg = new IgniteConfiguration();
> GeDistributedCacheFailureHandler failureHandler = new
> GeDistributedCacheFailureHandler();
> failureHandler.setIgnoredFailureTypes(Collections.EMPTY_SET);
> cfg.setFailureHandler(failureHandler);
>
>
> But still this is being ignored and *StopNodeFailureHandler* is being
> invoked. The failureContext indicates that it's due to a Segmentation error.
> Could you let me know how I can invoke a custom failure handler in this
> scenario?
>
> Application logs of the error:
>
>
>
>
>
> Thanks,
> Ashish
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Getting "Failed to deserialize object" exception while trying print value from Cache.Entry oblect

2021-03-03 Thread Alex Plehanov
Hello,

To make it work you should put the classes with the filters you want to use
on the server's classpath.

Tue, Mar 2, 2021 at 12:38, ChandanS :

> Thank you Alex Plehanov for the quick response. Could you please provide
> some details to make it work?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Getting "Failed to deserialize object" exception while trying print value from Cache.Entry oblect

2021-03-01 Thread Alex Plehanov
Hello,

You can use as a query filter only classes that are already deployed to the
server. In your case, the class ConnectAndExecuteTestDataInJava only exists on
the client side and the server knows nothing about it. Unlike Ignite nodes,
Ignite thin clients don't have a P2P class deployment feature.
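
As a sketch (class and type parameters are illustrative), the filter should be
a standalone named class compiled into a JAR that is placed on the server's
classpath, instead of a client-side lambda:

import org.apache.ignite.lang.IgniteBiPredicate;

// Must be deployed on the server nodes; a thin client can't ship it at runtime.
public class KeyEquals31Filter implements IgniteBiPredicate<Integer, String> {
    @Override public boolean apply(Integer key, String val) {
        return key.equals(31);
    }
}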

Mon, Mar 1, 2021 at 21:05, ChandanS :

> Hi,
>
> I am working on implementing ignite in AWS as EC2 cluster and I am
> following
> documentation from
> "
> https://www.gridgain.com/docs/latest/installation-guide/aws/manual-install-on-ec2
> "
> for my POC. I am using three EC2 instances to form a cluster with
> gridgain-community-8.8.1 package ("./gridgain-community-8.8.1/bin/ignite.sh
> aws-static-ip-finder.xml"). I am able to start ignite as cluster, load data
> in ignite cache and print the cache entries. Below is my code snippet:
>
> public class ConnectAndExecuteTestDataInJava {
>public static void main(String args[]) throws Exception {
> ConnectAndExecuteTestDataInJava igniteTestObj = new
> ConnectAndExecuteTestDataInJava();
> igniteTestObj .connectToIgniteClusterAndExecuteData();  //working
> fine
> igniteTestObj .printIgniteCache();  //working fine
> igniteTestObj .queryIgniteCache();  //not working
> }
>
>private void connectToIgniteClusterAndExecuteData() {
> try {
> System.out.println("Starting the client Program");
> ClientConfiguration cfg = new
> ClientConfiguration().setAddresses("XX.XXX.XX.XXX:10800");
> IgniteClient client = Ignition.startClient(cfg);
> System.out.println("Connection successfull");
>
> ClientCache<Integer, String> cache = client.getOrCreateCache("myCache");
>
> System.out.println("Caching data");
> for (int i = 0; i <= 100; ++i) {
> cache.put(i, "String -" + i);
> }
>
> System.out.println("Cache created with 100 sample data");
> client.close();
> } catch (Exception excp) {
> excp.printStackTrace();
> }
> }
>
> private void printIgniteCache() {
> System.out.println("Creating client connection");
> ClientConfiguration cfg = new
> ClientConfiguration().setAddresses("XX.XXX.XX.XXX:10800");
> System.out.println("Connection successful");
> try (IgniteClient client = Ignition.startClient(cfg)) {
> System.out.println("Getting data from the cache");
> ClientCache<Integer, String> cache = client.cache("myCache");
> System.out.println("Data retrieving");
>
> // Get data from the cache
> for(int i = 0; i < cache.size(); ++i){
> System.out.println(cache.get(i));  // prints entries like "String -11"
> }
> System.out.println("Data retrieved");
> } catch (Exception e) {
> System.out.println("Error connecting to client, program will
> exit");
> e.printStackTrace();
> }
> }
>
> private void queryIgniteCache() {
> System.out.println("Creating client connection to query");
> ClientConfiguration cfg = new
> ClientConfiguration().setAddresses("XX.XXX.XX.XXX:10800");
> System.out.println("Connection successful");
> try (IgniteClient client = Ignition.startClient(cfg)) {
> System.out.println("Init cache");
> ClientCache<Integer, String> cache = client.getOrCreateCache("myCache");
>
> System.out.println("Creating filter 1");
> IgniteBiPredicate<Integer, String> filter1 = (key, p) -> key.equals(new Integer(31));
> System.out.println("Applying filter 1");
> QueryCursor<Cache.Entry<Integer, String>> qryCursor1 = cache.query(new ScanQuery<>(filter1));
> System.out.println("Printing filter 1");
> qryCursor1.forEach(entry -> System.out.println("Key1 = " + entry.getKey()
> + ", Value1 = " + entry.getValue()));  // throwing exception here
> qryCursor1.close();
>
>
> System.out.println("Filter data retrieved");
>
> } catch (Exception e) {
> System.out.println("Error connecting to client, program will
> exit");
> e.printStackTrace();
> }
> }
> }
>
> Now in the above code, connecting to the Ignite cache and printing the cache
> using a for loop works as expected, but while using Cache.Entry<Integer,
> String> in the queryIgniteCache() method, it throws a "Failed to deserialize
> object" exception. I am running the program from another EC2 instance's
> session manager terminal; below is the exception stack-trace:
>
> Applying filter 1
> Printing filter 1
> Error connecting to client, program will exit
> org.apache.ignite.client.ClientException: Ignite failed to process request
> [2]: Failed to deserialize object
> [typeName=java.lang.invoke.SerializedLambda] (server status code [1])
> at
>
> 

Re: [2.7.6] Unsupported protocol version: -76 with thin client.

2021-01-29 Thread Alex Plehanov
Ilya, you can find a reproducer in the test
IgniteBinaryTest.testBinaryTypeWithIdOfMarshallerHeader.
In this thread, we are talking about a workaround for older versions where
this issue is not fixed.

Fri, Jan 29, 2021 at 11:49, Ilya Kasnacheev :

> Hello!
>
> Do you have reproducer for that entity name issue? I would very much like
> to check that. Why do you need to add 'x' to it?
>
> Regards,
> --
> Ilya Kasnacheev
>
>
> Thu, Jan 28, 2021 at 16:26, Maximiliano Gazquez <
> maximiliano@gmail.com>:
>
>> So we fixed it like this
>>
>> private static String fixEntityName(Function<String, Integer>
>> nameToTypeId, String name) {
>> String result = name;
>> while ((nameToTypeId.apply(result) & 0x00FF) == 103) {
>> result = "x" + result;
>> }
>> return result;
>> }
>> Where nameToTypeId is BinaryBasicIdMapper#typeId basically and it works.
>>
>> Now, is there any way to calculate the ClientConfiguration size? To avoid
>> the issue mentioned in https://issues.apache.org/jira/browse/IGNITE-13401
>>
>> Also, we were using the
>> thick client but we decided to migrate to the thin client because we need
>> interoperability between versions. Is there any documentation about this,
>> detailing compatibility between client/server versions?
>>
>> Thanks!
>> On 28 Jan 2021 9:08 -0300, Pavel Tupitsyn wrote:
>>
>> Ilya,
>>
>> Normally you can use any combination of thin client and server versions,
>> the highest common protocol version is negotiated automatically
>> during the handshake.
>>
>> There are some exceptions to this - not all thin clients support very old
>> protocols,
>> but with recent versions it should work.
>>
>> On Thu, Jan 28, 2021 at 1:58 PM Ilya Kasnacheev <
>> ilya.kasnach...@gmail.com> wrote:
>>
>>> Hello!
>>>
>>> I don't think you can use a more recent version of thin client with
>>> older server version.
>>> New thin client features usually require support from the server side as
>>> well.
>>>
>>> Regards,
>>> --
>>> Ilya Kasnacheev
>>>
>>>
>>> Wed, Jan 27, 2021 at 21:45, jjimeno :
>>>
 Hello,

 Thanks for your answer.
 The server is already in version 2.9.1 and the c++ thin client is from
 master branch to get transactions support.

 José



 --
 Sent from: http://apache-ignite-users.70518.x6.nabble.com/

>>>


Re: [2.7.6] Unsupported protocol version: -76 with thin client.

2021-01-28 Thread Alex Plehanov
Hello,

There are no public methods to calculate CacheConfiguration size.

As Pavel already said, all versions of the client are compatible with all
versions of the server; the feature set will be determined by the highest
common version.

Here is the feature compatibility list for the java thin client:
Cache API since 2.5 (server-side since 2.4)
Binary since 2.5 (server-side since 2.4)
Queries since 2.5 (server-side since 2.4)
Authentication since 2.5 (server-side since 2.5)
Transactions since 2.8 (server-side since 2.8)
Expiry policy since 2.8 (server-side since 2.8)
Cluster API since 2.9 (server-side since 2.8)
User attributes since 2.9 (server-side since 2.9)
Cluster groups API since 2.9 (server-side since 2.9)
Compute since 2.9 (server-side since 2.9)
Services since 2.9 (server-side since 2.9)



Thu, Jan 28, 2021 at 16:26, Maximiliano Gazquez :

> So we fixed it like this
>
> private static String fixEntityName(Function<String, Integer>
> nameToTypeId, String name) {
> String result = name;
> while ((nameToTypeId.apply(result) & 0x00FF) == 103) {
> result = "x" + result;
> }
> return result;
> }
> Where nameToTypeId is BinaryBasicIdMapper#typeId basically and it works.
>
> Now, is there any way to calculate the ClientConfiguration size? To avoid
> the issue mentioned in https://issues.apache.org/jira/browse/IGNITE-13401
>
> Also, we were using the
> thick client but we decided to migrate to the thin client because we need
> interoperability between versions. Is there any documentation about this,
> detailing compatibility between client/server versions?
>
> Thanks!
> On 28 Jan 2021 9:08 -0300, Pavel Tupitsyn wrote:
>
> Ilya,
>
> Normally you can use any combination of thin client and server versions,
> the highest common protocol version is negotiated automatically
> during the handshake.
>
> There are some exceptions to this - not all thin clients support very old
> protocols,
> but with recent versions it should work.
>
> On Thu, Jan 28, 2021 at 1:58 PM Ilya Kasnacheev 
> wrote:
>
>> Hello!
>>
>> I don't think you can use a more recent version of thin client with older
>> server version.
>> New thin client features usually require support from the server side as
>> well.
>>
>> Regards,
>> --
>> Ilya Kasnacheev
>>
>>
>> Wed, Jan 27, 2021 at 21:45, jjimeno :
>>
>>> Hello,
>>>
>>> Thanks for your answer.
>>> The server is already in version 2.9.1 and the c++ thin client is from
>>> master branch to get transactions support.
>>>
>>> José
>>>
>>>
>>>
>>> --
>>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>>>
>>


Re: [2.7.6] Unsupported protocol version: -76 with thin client.

2021-01-27 Thread Alex Plehanov
Hello,

Can you please share your stacktrace? In 2.7.6 I think this bug
affects only cache configuration retrieval and binary metadata retrieval.
To avoid problems with binary metadata retrieval you should not use types
which typeId (IgniteClient.binary().typeId()) starts with byte 103 as cache
keys or values.

If you can't upgrade Ignite server to 2.9.1, perhaps you can upgrade only
the client? This will also solve the problem.
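
A quick sketch of the check (typeName and client are placeholders; 103 is the
GridBinaryMarshaller.OBJ value mentioned above):

// typeId is serialized little-endian, so its lowest byte is the one that can clash
int typeId = client.binary().typeId(typeName);
boolean clashes = (typeId & 0xFF) == 103;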

Wed, Jan 27, 2021 at 19:46, maxi628 :

> Hello everyone.
>
> I'm having an issue that appears randomly on my internal product testing.
> Looks like it's related to
> https://issues.apache.org/jira/browse/IGNITE-13401
> but the stacktrace looks different.
> Basically I'm building a BinaryObject which ends up fetching
> BinaryObjectMetadata from the server and it fails with that exception.
>
>
>
> Upgrading to ignite 2.9.1 where it's patched isn't an option for me.
> The BinaryObjectMetadata is user-configured via some interfaces.
> Is there any workarounds for this?
> If we have a way to calculate the size of the metadata before sending it we
> could add a dummy field in those cases where it ends up matching the
> GridBinaryMarshaller.OBJ on the second byte of the stream.
>
> Here's the datagram in hex
>
>
> And a photo from the debugger in which it shows the problematic value
>
> Screen_Shot_2021-01-27_at_10.png
> <
> http://apache-ignite-users.70518.x6.nabble.com/file/t3058/Screen_Shot_2021-01-27_at_10.png>
>
>
> Thanks everyone.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: SqlQuery Deprecated Since 2.8, please use SqlFieldsQuery instead....How to get result as passing model Type with SqlFieldsQuery?

2021-01-21 Thread Alex Plehanov
Hello,

Try this: "select _val from mytesttable"

Thu, Jan 21, 2021 at 17:38, siva :

> Hi,
> I am using .NET Ignite v2.9.1.
>
> Since SqlQuery is deprecated, the documentation suggests using
> SqlFieldsQuery instead.
>
> What is the way to get the result as a Type using SqlFieldsQuery?
>
> for example query
> select * from mytesttable
>
> Thanks.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: removeAll operation on same cache causing deadlock

2021-01-20 Thread Alex Plehanov
Hello,

Unfortunately, for atomic caches in 2.9.1 a java-level deadlock is still
possible (Ignite system threads can hang) for mass operations (putAll,
removeAll, invokeAll) if an unordered key set is used. See [1]. This issue
will be fixed in the 2.10 release.

[1]: https://issues.apache.org/jira/browse/IGNITE-12451
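
Until then, a possible workaround sketch (unorderedKeys is a placeholder): pass
the keys in a deterministic order, e.g. via a sorted set, so concurrent bulk
operations always lock keys in the same order:

import java.util.SortedSet;
import java.util.TreeSet;

SortedSet<Integer> keys = new TreeSet<>(unorderedKeys);
cache.removeAll(keys);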

Thu, Jan 21, 2021 at 09:30, Kamlesh Joshi :

> Hi Igniters,
>
>
>
> Recently, we upgraded our lower environment to Ignite 2.9.1 from 2.7.0.
> Earlier, we faced an issue of the cluster going into a hang state if there are
> multiple threads performing a *removeAll*() operation on the same cache.
>
> It's mentioned in the release notes that “*Fixed deadlock on concurrent
> removeAll() on the same cache*” has been fixed. However, we were
> still able to reproduce the issue in this version as well. Can anyone
> please validate this?
>
>
>
> *Thanks and Regards,*
>
> *Kamlesh Joshi*
>
>
>
>
> "*Confidentiality Warning*: This message and any attachments are intended
> only for the use of the intended recipient(s), are confidential and may be
> privileged. If you are not the intended recipient, you are hereby notified
> that any review, re-transmission, conversion to hard copy, copying,
> circulation or other use of this message and any attachments is strictly
> prohibited. If you are not the intended recipient, please notify the sender
> immediately by return email and delete this message and any attachments
> from your system.
>
> *Virus Warning:* Although the company has taken reasonable precautions to
> ensure no viruses are present in this email. The company cannot accept
> responsibility for any loss or damage arising from the use of this email or
> attachment."
>


Re: Any custom eviction policies to flush data from memory to disk

2021-01-18 Thread Alex Plehanov
Hello, Naveen

In Ignite, when we talk about rotating data between disk and memory in the
data region with native persistence, we call it 'replacement', not
'eviction'.
Currently, there is only one replacement policy - Random-LRU. Perhaps,
there will be more policies soon, but without the ability to implement
custom user policies (see IEP [1] and related ticket [2]).
Replacement works at the page level, not the entry level. It starts when page
memory is full and we need to load some pages from the disk. Ignite doesn't
perform premature replacement or premature page loads; pages are loaded and
replacement occurs only on demand. It's unlikely that a custom replacement
policy based on entries would be effective.

[1]:
https://cwiki.apache.org/confluence/display/IGNITE/IEP-62+Page+replacement+improvements
[2]: https://issues.apache.org/jira/browse/IGNITE-13761

Tue, Jan 19, 2021 at 09:03, Naveen :

> Hi Stephen
>
> On the same mail chain: we also have data like OTPs (one-time passwords) which
> are no longer relevant after a while, but we don't want to expire or delete
> them, just get them flushed to disk. Likewise, we have other requirements
> where data
> is very relevant only for a certain duration and later on is not important.
> That's the whole idea of exploring eviction policies.
>
> Naveen
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: On disk compression

2020-11-17 Thread Alex Plehanov
If you have a write-heavy workload, to reduce disk usage you can also
compress WAL (see "WAL compaction" and "WAL page snapshots compression"
features).
I'm not sure about ZSTD compression levels, you can try it. But there is a
warning in the ZSTD manual: "Levels >= 20 should be used with caution, as
they require more memory". Perhaps someone who is more familiar with ZSTD
will answer how higher compression levels affect resource consumption
during decompression.
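
A sketch of enabling both (setter names as in recent 2.x versions; cfg is an
IgniteConfiguration placeholder):

import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.DiskPageCompression;

DataStorageConfiguration dsCfg = new DataStorageConfiguration();
dsCfg.setWalCompactionEnabled(true);                   // compress archived WAL segments
dsCfg.setWalPageCompression(DiskPageCompression.ZSTD); // compress WAL page snapshot records
cfg.setDataStorageConfiguration(dsCfg);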

Tue, Nov 17, 2020 at 11:00, David Tinker :

> Aha! I didn't know about the sparse file thing. Thanks!
>
> # ll -hs
> 159M -rw-r--r-- 1 ignite ignite 339M Nov 16 21:32 part-96.bin
>
> So the real space used is only 159M. That's great. I currently have all of
> this data stored on the filesystem in  csv.gz files using 177M of space for
> the 16000 I tested with.
>
> Any other tips on how to reduce disk usage? Any point in using compression
> level more than 18 for ZSTD? Most of this data will only be written once so
> I am not so concerned about write speed.
>
>
> On Tue, Nov 17, 2020 at 9:34 AM Alex Plehanov 
> wrote:
>
>> Hello,
>>
>> Ignite compresses each page individually. The result of whole file
>> compression will always be better than the result of each individual page
>> compression. Moreover, Ignite stores compressed pages only if the page size
>> shrunk by one or more filesystem blocks. So, for example, if you have fs
>> block size 4K, page size 16Kb and after compression your page size is 13Kb,
>> then the page will be stored without compression.
>>
>> BTW, how do you check file size? Ignite compression uses sparse files.
>> "ls -l" reports allocated file size and doesn't utilize information about
>> "holes" in a sparse file. To see the real amount of disk space occupied by
>> the file you should use "du" or "ls -s".
>>
>>
>> Tue, Nov 17, 2020 at 06:18, David Tinker :
>>
>>> I have enabled compression
>>> (pageSize=16384, diskPageCompression=ZSTD, diskPageCompressionLevel=18) but
>>> the partition files don't appear to be very compressed. I tested by adding
>>> approx 16000 data items to my cache and looking at the partition files on
>>> disk.
>>>
>>> Example: part-96.bin is 339M in size. If I compress that file with zstd
>>> (default settings) it goes down to 106M.
>>>
>>> Is it possible to do better than this with Ignite? I need to be able to
>>> store a lot of data.
>>>
>>> Thanks
>>> David
>>>
>>> Relevant parts of my ignite config:
>>>
>>> <bean class="org.apache.ignite.configuration.IgniteConfiguration">
>>> ...
>>>
>>> <property name="dataStorageConfiguration">
>>> <bean class="org.apache.ignite.configuration.DataStorageConfiguration">
>>> <property name="pageSize" value="16384"/>
>>> ...
>>> </bean>
>>> </property>
>>>
>>> <property name="cacheConfiguration">
>>> <bean class="org.apache.ignite.configuration.CacheConfiguration">
>>> <property name="diskPageCompression" value="ZSTD"/>
>>> <property name="diskPageCompressionLevel" value="18"/>
>>> ...
>>> </bean>
>>> </property>
>>> </bean>
>>>
>>>


Re: On disk compression

2020-11-16 Thread Alex Plehanov
Hello,

Ignite compresses each page individually. The result of whole file
compression will always be better than the result of each individual page
compression. Moreover, Ignite stores compressed pages only if the page size
shrunk by one or more filesystem blocks. So, for example, if you have fs
block size 4K, page size 16Kb and after compression your page size is 13Kb,
then the page will be stored without compression.

BTW, how do you check file size? Ignite compression uses sparse files. "ls
-l" reports allocated file size and doesn't utilize information about
"holes" in a sparse file. To see the real amount of disk space occupied by
the file you should use "du" or "ls -s".


Tue, Nov 17, 2020 at 06:18, David Tinker :

> I have enabled compression
> (pageSize=16384, diskPageCompression=ZSTD, diskPageCompressionLevel=18) but
> the partition files don't appear to be very compressed. I tested by adding
> approx 16000 data items to my cache and looking at the partition files on
> disk.
>
> Example: part-96.bin is 339M in size. If I compress that file with zstd
> (default settings) it goes down to 106M.
>
> Is it possible to do better than this with Ignite? I need to be able to
> store a lot of data.
>
> Thanks
> David
>
> Relevant parts of my ignite config:
>
> <bean class="org.apache.ignite.configuration.IgniteConfiguration">
> ...
>
> <property name="dataStorageConfiguration">
> <bean class="org.apache.ignite.configuration.DataStorageConfiguration">
> <property name="pageSize" value="16384"/>
> ...
> </bean>
> </property>
>
> <property name="cacheConfiguration">
> <bean class="org.apache.ignite.configuration.CacheConfiguration">
> <property name="diskPageCompression" value="ZSTD"/>
> <property name="diskPageCompressionLevel" value="18"/>
> ...
> </bean>
> </property>
> </bean>
>
>


Re: Issue with ignite thin client - ReliableChannel

2020-11-16 Thread Alex Plehanov
Hello,

This problem was only in Ignite 2.8 with enabled partition awareness. Fixed
in Ignite 2.8.1, see [1].

[1]: https://issues.apache.org/jira/browse/IGNITE-12743

Tue, Nov 17, 2020 at 01:53, Hemambara :

> Ignite thin client creates and uses ReliableChannel, which starts the below
> async thread, but while closing the channel it is not shutting down this
> executor service thread. This should be fine if the JVM shuts down, but it will be a
> problem for applications sharing jvm node. Ex: 1 JVM has multiple
> applications on app server like Jboss or Mule and each application can be
> deployed or undeployed independently. This is not jvm shutdown, it is just
> unloading classloader or closing spring context. In this case we are seeing
> this background thread is still hanging after closing the spring context and
> cannot be stopped. Can you please create a jira? I can pick up work on that.
>
> Probably in the onClose() method we need to call shutdown() or shutdownNow(),
> based on your advice.
>
> private final ExecutorService asyncRunner =
> Executors.newSingleThreadExecutor(
> new ThreadFactory() {
> @Override public Thread newThread(@NotNull Runnable r) {
> return new Thread(r, "thin-client-channel-async-runner");
> }
> }
> );
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: [ignite 2.9.0] thin clients cannot access the Ignite Service deployed through UriDeploymentSpi( java.lang.ClassNotFoundException)

2020-11-05 Thread Alex Plehanov
Hello,

Thanks for the report, I will try to fix it shortly.

Wed, Nov 4, 2020 at 12:35, 18624049226 <18624049...@163.com>:

> Hi community,
>
> The operation steps are as follows:
>
> 1. Use ignite.sh example-deploy.xml to start a server node
>
> 2. Put the service jar package in the /home/test/deploy directory
>
> 3. Deploy the services using DeployClient
>
> 4. If you use ThickClientTest and ThinClientTest to access the service
> respectively, you will find that the ThickClientTest access is
> successful, but the ThinClientTest access fails. The error is
> java.lang.ClassNotFoundException.
>
> See ticket below for details:
>
> https://issues.apache.org/jira/browse/IGNITE-13633
>
>
>


Re: How to get column names for a query in Ignite thin client mode

2020-11-04 Thread Alex Plehanov
Currently, only field names can be obtained; there is no information about
field data types in the thin client protocol.

Wed, Nov 4, 2020 at 13:58, Shravya Nethula <
shravya.neth...@aline-consulting.com>:

> Ilya and Alex,
>
> Thank you for the information.
> Can you please also suggest how to get the datatypes of those columns
> obtained from the query?
>
>
> Regards,
>
> Shravya Nethula,
>
> BigData Developer,
>
>
> Hyderabad.
>
>
> --
> *From:* Alex Plehanov 
> *Sent:* Tuesday, November 3, 2020 12:13 PM
> *To:* user@ignite.apache.org 
> *Subject:* Re: How to get column names for a query in Ignite thin client
> mode
>
> Column information is read by the thin client only after the first data
> request, so you need to read at least one row to get the columns.
>
Tue, Nov 3, 2020 at 09:31, Ilya Kazakov :
>
> Hello, Shravya! It is very interesting! I am trying to reproduce your
> case, and here is what I see: I can see column names in the thin client only
> after query execution.
>
> For example:
>
> ClientConfiguration clientConfig = new 
> ClientConfiguration().setAddresses("127.0.0.1");
> try(IgniteClient thinClient = Ignition.startClient(clientConfig)){
> SqlFieldsQuery sql = new SqlFieldsQuery("SELECT * FROM T1");
> FieldsQueryCursor<List<?>> cursor = thinClient.query(sql);
> cursor.getAll();
> int count = cursor.getColumnsCount();
> System.out.println(count);
> List<String> columnNames = new ArrayList<>();
> for (int i = 0; i < count; i++) {
> String columnName = cursor.getFieldName(i);
> columnNames.add(columnName);
> }
> System.out.println("columnNames:::"+columnNames);
> }
>
>
> But if this is the correct behavior I do not know yet, I will try to find
> out.
>
> 
> Ilya Kazakov
>
> Tue, Nov 3, 2020 at 12:51, Shravya Nethula <
> shravya.neth...@aline-consulting.com>:
>
> Hi,
>
> *For Ignite thick client, the column names for a given sql query are
> coming up as expected with the following code:*
> public class ClientNode {
>
> public static void main(String[] args) {
> IgniteConfiguration igniteCfg = new IgniteConfiguration();
> igniteCfg.setClientMode(true);
>
> Ignite ignite = Ignition.start(igniteCfg);
> *IgniteCache foo **= ignite.getOrCreateCache("foo");*
>
> SqlFieldsQuery sql = new SqlFieldsQuery("SELECT * FROM person");
> *FieldsQueryCursor<List<?>> cursor = foo.query(sql);*
> int count = cursor.getColumnsCount();
> List<String> columnNames = new ArrayList<>();
>
> for (int i = 0; i < count; i++) {
>   String columnName = cursor.getFieldName(i);
>   columnNames.add(columnName);
> }
> System.out.println("columnNames:::"+columnNames);
>
>  } }
>  *Output:*
>  *columnNames:::[ID, NAME, LAST_NAME, AGE, CITY_ID, EMAIL_ID]
> *
> *On the other hand, for thin client, the column names are coming up as empty 
> list.*
> The following is the code:
> public class ClientNode {
>
> public static void main(String[] args) {
> ClientConfiguration clientConfig = new ClientConfiguration();
> cc.setUserName("username");
> cc.setUserPassword("password");
>
> *IgniteClient thinClient = Ignition.startClient(clientConfig);*
>
> SqlFieldsQuery sql = new SqlFieldsQuery("SELECT * FROM person");
> *FieldsQueryCursor<List<?>> cursor = thinClient.query(sql);*
> int count = cursor.getColumnsCount();
> List<String> columnNames = new ArrayList<>();
>
> for (int i = 0; i < count; i++) {
>   String columnName = cursor.getFieldName(i);
>   columnNames.add(columnName);
> }
> System.out.println("columnNames:::"+columnNames);
>
>  } }
>
> *Output:**columnNames:::[ ]*
>
> While using IgniteCache.query(SqlFieldsQuery), the column names are
> coming up. But while using IgniteClient.query(SqlFieldsQuery), the column
> names are not coming up. Are we missing any configurations? Is there
> something wrong in the code? And also is there anyway in which we can
> identify the datatype of columns given in the query! We are looking for
> the datatype of the columns in the query but not the datatype of columns in
> the table!
>
> Any help here will be much appreciated!
> Thanks in advance!
>
>
>
> Regards,
>
> Shravya Nethula,
>
> BigData Developer,
>
>
> Hyderabad.
>
>


Re: How to get column names for a query in Ignite thin client mode

2020-11-02 Thread Alex Plehanov
Column information is read by the thin client only after the first data
request, so you need to read at least one row to get the columns.

Tue, Nov 3, 2020 at 09:31, Ilya Kazakov :

> Hello, Shravya! It is very interesting! I am trying to reproduce your
> case, and here is what I see: I can see column names in the thin client only
> after query execution.
>
> For example:
>
> ClientConfiguration clientConfig = new 
> ClientConfiguration().setAddresses("127.0.0.1");
> try(IgniteClient thinClient = Ignition.startClient(clientConfig)){
> SqlFieldsQuery sql = new SqlFieldsQuery("SELECT * FROM T1");
> FieldsQueryCursor<List<?>> cursor = thinClient.query(sql);
> cursor.getAll();
> int count = cursor.getColumnsCount();
> System.out.println(count);
> List<String> columnNames = new ArrayList<>();
> for (int i = 0; i < count; i++) {
> String columnName = cursor.getFieldName(i);
> columnNames.add(columnName);
> }
> System.out.println("columnNames:::"+columnNames);
> }
>
>
> But if this is the correct behavior I do not know yet, I will try to find
> out.
>
> 
> Ilya Kazakov
>
> Tue, Nov 3, 2020 at 12:51, Shravya Nethula <
> shravya.neth...@aline-consulting.com>:
>
>> Hi,
>>
>> *For Ignite thick client, the column names for a given sql query are
>> coming up as expected with the following code:*
>> public class ClientNode {
>>
>> public static void main(String[] args) {
>> IgniteConfiguration igniteCfg = new IgniteConfiguration();
>> igniteCfg.setClientMode(true);
>>
>> Ignite ignite = Ignition.start(igniteCfg);
>> *IgniteCache foo **= ignite.getOrCreateCache("foo");*
>>
>> SqlFieldsQuery sql = new SqlFieldsQuery("SELECT * FROM person");
>> *FieldsQueryCursor<List<?>> cursor = foo.query(sql);*
>> int count = cursor.getColumnsCount();
>> List<String> columnNames = new ArrayList<>();
>>
>> for (int i = 0; i < count; i++) {
>>   String columnName = cursor.getFieldName(i);
>>   columnNames.add(columnName);
>> }
>> System.out.println("columnNames:::"+columnNames);
>>
>>  } }
>>  *Output:*
>>  *columnNames:::[ID, NAME, LAST_NAME, AGE, CITY_ID, EMAIL_ID]
>> *
>> *On the other hand, for thin client, the column names are coming up as empty 
>> list.*
>> The following is the code:
>> public class ClientNode {
>>
>> public static void main(String[] args) {
>> ClientConfiguration clientConfig = new ClientConfiguration();
>> cc.setUserName("username");
>> cc.setUserPassword("password");
>>
>> *IgniteClient thinClient = Ignition.startClient(clientConfig);*
>>
>> SqlFieldsQuery sql = new SqlFieldsQuery("SELECT * FROM person");
>> *FieldsQueryCursor<List<?>> cursor = thinClient.query(sql);*
>> int count = cursor.getColumnsCount();
>> List<String> columnNames = new ArrayList<>();
>>
>> for (int i = 0; i < count; i++) {
>>   String columnName = cursor.getFieldName(i);
>>   columnNames.add(columnName);
>> }
>> System.out.println("columnNames:::"+columnNames);
>>
>>  } }
>>
>> *Output:**columnNames:::[ ]*
>>
>> While using IgniteCache.query(SqlFieldsQuery), the column names are
>> coming up. But while using IgniteClient.query(SqlFieldsQuery), the
>> column names are not coming up. Are we missing any configurations? Is there
>> something wrong in the code? And also is there anyway in which we can
>> identify the datatype of columns given in the query! We are looking for
>> the datatype of the columns in the query but not the datatype of columns in
>> the table!
>>
>> Any help here will be much appreciated!
>> Thanks in advance!
>>
>>
>>
>> Regards,
>>
>> Shravya Nethula,
>>
>> BigData Developer,
>>
>>
>> Hyderabad.
>>
>>


Re: [External]Re: Usage of TransactionConfiguration to overcome deadlocked threads

2020-10-21 Thread Alex Plehanov
If you have user-defined types and can sort them yourself, you can use a
LinkedHashMap as the argument to preserve the order and avoid deadlocks.
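
A sketch of that approach (entries, myKeyComparator and cache are placeholders):

import java.util.LinkedHashMap;
import java.util.Map;
import org.apache.ignite.binary.BinaryObject;

Map<BinaryObject, BinaryObject> ordered = new LinkedHashMap<>();
entries.entrySet().stream()
    .sorted(Map.Entry.comparingByKey(myKeyComparator)) // your own key comparator
    .forEach(e -> ordered.put(e.getKey(), e.getValue()));
cache.putAll(ordered); // LinkedHashMap keeps the sorted iteration order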

Wed, Oct 21, 2020 at 09:44, Kamlesh Joshi :

> Thanks for the update Alex.
>
>
>
> Actually we are using BinaryObjects for such operations. Is there any
> implementation available (or a reference) for sorting user-defined types of
> objects?
>
>
>
> *Thanks and Regards,*
>
> *Kamlesh Joshi*
>
>
>
> *From:* Alex Plehanov 
> *Sent:* 20 October 2020 19:54
> *To:* user@ignite.apache.org
> *Subject:* [External]Re: Usage of TransactionConfiguration to overcome
> deadlocked threads
>
>
>
> The e-mail below is from an external source. Please do not open
> attachments or click links from an unknown or suspicious origin.
>
> Hello,
>
>
>
> TransactionConfiguration property has nothing to do with atomic caches.
> Perhaps your threads were deadlocked due to atomic putAll/removeAll
> operations with an unordered set of keys. It's a known issue and I hope it
> will be fixed soon. See [1] for detailed information. Until this ticket is
> fixed you should avoid concurrent putAll/removeAll operations with an
> unordered set of keys on atomic caches (putAll with HashMap as an argument,
> for example).
>
>
>
> [1]: https://issues.apache.org/jira/browse/IGNITE-12451
>
>
>
> Tue, Oct 20, 2020 at 12:16, Kamlesh Joshi :
>
> Hi Igniters,
>
>
>
> We are currently using ATOMIC caches for our operations. Recently, we
> observed cluster hang issue, the operations were stuck for quite a long
> time (had to bring down the cluster to resolve this). So, after some
> digging found that setting up below property should resolve this. Could you
> please confirm on below:
>
>
>
>1. Whether this needs to be set on both Ignite servers and Ignite
>thick clients?
>2. Or setting on cluster should suffice?
>3. What should be the optimum value for *defaultTxTimeout*
>
>
>
>
>
> <property name="transactionConfiguration">
> <bean class="org.apache.ignite.configuration.TransactionConfiguration">
> <property name="defaultTxTimeout" value="..."/>
> </bean>
> </property>
>
>
>
>
>
>
>
>
>
> *Thanks and Regards,*
>
> *Kamlesh Joshi*
>
>
>
>
> "*Confidentiality Warning*: This message and any attachments are intended
> only for the use of the intended recipient(s), are confidential and may be
> privileged. If you are not the intended recipient, you are hereby notified
> that any review, re-transmission, conversion to hard copy, copying,
> circulation or other use of this message and any attachments is strictly
> prohibited. If you are not the intended recipient, please notify the sender
> immediately by return email and delete this message and any attachments
> from your system.
>
> *Virus Warning:* Although the company has taken reasonable precautions to
> ensure no viruses are present in this email. The company cannot accept
> responsibility for any loss or damage arising from the use of this email or
> attachment."
>
>


Re: Usage of TransactionConfiguration to overcome deadlocked threads

2020-10-20 Thread Alex Plehanov
Hello,

TransactionConfiguration property has nothing to do with atomic caches.
Perhaps your threads were deadlocked due to atomic putAll/removeAll
operations with an unordered set of keys. It's a known issue and I hope it
will be fixed soon. See [1] for detailed information. Until this ticket is
fixed you should avoid concurrent putAll/removeAll operations with an
unordered set of keys on atomic caches (putAll with HashMap as an argument,
for example).

[1]: https://issues.apache.org/jira/browse/IGNITE-12451
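
For example, a sketch (unorderedMap is a placeholder; the keys must be
Comparable, or a Comparator must be supplied):

// TreeMap iterates keys in sorted order, unlike HashMap, so concurrent
// putAll calls acquire their locks in a consistent order.
cache.putAll(new java.util.TreeMap<>(unorderedMap));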

Tue, Oct 20, 2020 at 12:16, Kamlesh Joshi :

> Hi Igniters,
>
>
>
> We are currently using ATOMIC caches for our operations. Recently, we
> observed cluster hang issue, the operations were stuck for quite a long
> time (had to bring down the cluster to resolve this). So, after some
> digging found that setting up below property should resolve this. Could you
> please confirm on below:
>
>
>
>1. Whether this needs to be set on both Ignite servers and Ignite
>thick clients?
>2. Or setting on cluster should suffice?
>3. What should be the optimum value for *defaultTxTimeout*
>
>
>
>
>
> <property name="transactionConfiguration">
> <bean class="org.apache.ignite.configuration.TransactionConfiguration">
> <property name="defaultTxTimeout" value="..."/>
> </bean>
> </property>
>
>
>
>
>
>
>
>
>
> *Thanks and Regards,*
>
> *Kamlesh Joshi*
>
>
>
>
> "*Confidentiality Warning*: This message and any attachments are intended
> only for the use of the intended recipient(s), are confidential and may be
> privileged. If you are not the intended recipient, you are hereby notified
> that any review, re-transmission, conversion to hard copy, copying,
> circulation or other use of this message and any attachments is strictly
> prohibited. If you are not the intended recipient, please notify the sender
> immediately by return email and delete this message and any attachments
> from your system.
>
> *Virus Warning:* Although the company has taken reasonable precautions to
> ensure no viruses are present in this email. The company cannot accept
> responsibility for any loss or damage arising from the use of this email or
> attachment."
>


Re: Ignite Thin Client - Compute Jobs

2020-09-21 Thread Alex Plehanov
Hello,

This feature was implemented for java and .Net thin clients in Ignite
version 2.9, but this version is not released yet.

Mon, Sep 21, 2020 at 09:51, Mahesh Renduchintala <
mahesh.renduchint...@aline-consulting.com>:

> Hi,
>
> With a thin client handle, is it possible to launch tasks on the compute
> grid?
>
> regards
> mahesh
>
>


Re: How to confirm that disk compression is in effect?

2020-09-09 Thread Alex Plehanov
Hello.

Actually, the performance test results attached to IGNITE-11336 are not correct;
I forgot to delete these results from the ticket. The environment was not
tuned correctly and I got too-frequent checkpoints for runs without WAL
compression, and this leads to bad results for "compdisabled" runs. But I've
benchmarked it again later and there is still a performance boost of about 30%
on our synthetic tests in our environment. Also, the benchmarks were based on
an early 2.8 Ignite version; later in 2.8 some optimizations were introduced
which reduced the count of page snapshots in the WAL. So, currently, you
can still get a several percent (it depends on your data and your
environment) performance boost by using WAL page snapshot compression, but
don't expect 2x or more.


Wed, Sep 9, 2020 at 04:37, 38797715 <38797...@qq.com>:

> Hi,
>
> I tried to test the following scenario, but it didn't seem to improve.
>
> pageSize=4096 & wal compression enabled & COPY command import for 6M data
>
>
> I've looked at the following discussion and performance test results, and
> it seems that the throughput has been improved by 2x-4x.
>
> https://issues.apache.org/jira/browse/IGNITE-11336
>
>
> http://apache-ignite-developers.2346864.n4.nabble.com/Disk-page-compression-for-Ignite-persistent-store-td38009.html
>
> According to my understanding, the execution time of the copy command
> should be greatly reduced, but this is not the case. Why?
On 2020/9/8 5:16 PM, Ilya Kasnacheev wrote:
>
> Hello!
>
> If your data does not compress at least 2x, then pageSize=8192 is useless.
> Frankly speaking I've never seen any beneficial deployments of page
> compression. I recommend turning it off and keeping WAL compression only.
>
> Regards,
> --
> Ilya Kasnacheev
>
>
Tue, Sep 8, 2020 at 05:18, 38797715 <38797...@qq.com>:
>
>> Hi Ilya,
>>
>> This module has already been imported.
>>
>> We re tested three scenarios:
>>
>> 1.pageSize=4096
>>
>> 2.pageSize=8192
>>
>> 3.pageSize=8192,disk compression and wal compression are enabled at the
>> same time.
>>
>> From the test results, pageSize = 4096, the writing speed of this
>> scenario is slightly faster, and the disk space occupation is slightly
>> smaller, but the amplitude is less than 10%.
>>
>> In the two scenarios with pageSize = 8192, there is no big difference in
>> write speed and disk space usage. However, for wal files, the size of a
>> single file will always be 64M. It is not clear whether more compressed
>> data is stored in the file.
>>
>> My test environment is:
>>
>> For notebook computers (8G RAM, 256G SSD), Apache ignite version is
>> 2.8.1, and the COPY command is used to import 6M data.
>> On 2020/9/7 10:06 PM, Ilya Kasnacheev wrote:
>>
>> Hello!
>>
>> Did you add `ignite-compres` module to your classpath?
>>
>> Have you tried WAL compression instead? Please check
>> https://apacheignite.readme.io/docs/write-ahead-log#section-wal-records-compression
>>
>> Regards,
>> --
>> Ilya Kasnacheev
>>
>>
>> Fri, Aug 28, 2020 at 06:52, 38797715 <38797...@qq.com>:
>>
>>> Hi,
>>>
>>> create table statement are as follows:
>>> CREATE TABLE PI_COM_DAY
>>> (COM_ID VARCHAR(30) NOT NULL ,
>>> ITEM_ID VARCHAR(30) NOT NULL ,
>>> DATE1 VARCHAR(8) NOT NULL ,
>>> KIND VARCHAR(1),
>>> QTY_IOD DECIMAL(18, 6) ,
>>> AMT_IOD DECIMAL(18, 6) ,
>>> QTY_PURCH DECIMAL(18, 6) ,
>>> AMT_PURCH DECIMAL(18,6) ,
>>> QTY_SOLD DECIMAL(18,6) ,
>>> AMT_SOLD DECIMAL(18, 6) ,
>>> AMT_SOLD_NO_TAX DECIMAL(18, 6) ,
>>> QTY_PROFIT DECIMAL(18, 6) ,
>>> AMT_PROFIT DECIMAL(18, 6) ,
>>> QTY_LOSS DECIMAL(18,6) ,
>>> AMT_LOSS DECIMAL(18, 6) ,
>>> QTY_EOD DECIMAL(18, 6) ,
>>> AMT_EOD DECIMAL(18,6) ,
>>> UNIT_COST DECIMAL(18,8) ,
>>> SUMCOST_SOLD DECIMAL(18,6) ,
>>> GROSS_PROFIT DECIMAL(18, 6) ,
>>> QTY_ALLOCATION DECIMAL(18,6) ,
>>> AMT_ALLOCATION DECIMAL(18,2) ,
>>> AMT_ALLOCATION_NO_TAX DECIMAL(18, 2) ,
>>> GROSS_PROFIT_ALLOCATION DECIMAL(18,6) ,
>>> SUMCOST_SOLD_ALLOCATION DECIMAL(18,6) ,
>>> PRIMARY KEY (COM_ID,ITEM_ID,DATE1)) WITH
>>> "template=cache-partitioned,CACHE_NAME=PI_COM_DAY";
>>> CREATE INDEX IDX_PI_COM_DAY_ITEM_DATE ON PI_COM_DAY(ITEM_ID,DATE1);
>>>
>>> I don't think there's anything special about it.
>>> Then we imported 10 million rows using the COPY command. The data is
>>> basically actual production data; I think the dispersion is OK, not
>>> artificial data with high similarity.
>>> I would like to know if there are test results for the disk compression
>>> feature. Most of the other in-memory databases also have data
>>> compression, but it doesn't look like it works here, or is there something
>>> wrong with my setup?
>>>
>>> On 2020/8/28 12:39 AM, Michael Cherkasov wrote:
>>>
>>> Could you please share your benchmark code? I believe compression might
>>> depend on data you write, if it full random, it's difficult to compress the
>>> data.
>>>
>>> On Wed, Aug 26, 2020, 8:26 PM 38797715 <38797...@qq.com> wrote:
>>>
 Hi,

 We turn on disk compression to see the trend of execution time and disk
 space.

 

Re: Reconnect is not allowed due to applied throttling

2020-08-04 Thread Alex Plehanov
Hello,

Ignite thin client applies connection throttling if you have several
failed attempts to connect to some server (this server will be skipped for
some time to avoid waiting for connection timeouts on each attempt to
connect, the client will try to connect to the next servers if they are
configured).
By default, if your server is unavailable and you get 3 ClientConnectionExceptions
in a row within 30 seconds on the same channel, throttling
will be applied. The next real attempt to connect to this server will be
made 30 seconds after the time of the first exception.
Connection throttling parameters can be changed by ClientConfiguration (see
reconnectThrottlingPeriod and reconnectThrottlingRetries).
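
For example, a sketch (addresses are placeholders):

ClientConfiguration cfg = new ClientConfiguration()
    .setAddresses("server1:10800", "server2:10800")
    .setReconnectThrottlingPeriod(60_000) // period in milliseconds
    .setReconnectThrottlingRetries(5);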


Tue, Aug 4, 2020 at 13:37, AravindJP :

> Hi ,
> I am getting the below exception after running an Ignite client
> continuously for 48 hrs. My client is running on Java (Spring Boot). I am
> using a singleton instance of the IgniteClient object to connect and persist
> data.
>
> Ignite version 2.8.0
>
> 2020-08-04 10:02:18.520 ERROR 1 --- [sub-subscriber3]: Exception with
> org.apache.ignite.client.ClientConnectionException: Reconnect is not allowed
> due to applied throttling
> at
>
> org.apache.ignite.internal.client.thin.ReliableChannel$ClientChannelHolder.getOrCreateChannel(ReliableCha
> nnel.java:448)
> at
>
> org.apache.ignite.internal.client.thin.ReliableChannel$ClientChannelHolder.getOrCreateChannel(ReliableCha
> nnel.java:439)
> at
>
> org.apache.ignite.internal.client.thin.ReliableChannel$ClientChannelHolder.access$100(ReliableChannel.jav
> a:395)
> at
>
> org.apache.ignite.internal.client.thin.ReliableChannel.channel(ReliableChannel.java:318)
> at
>
> org.apache.ignite.internal.client.thin.ReliableChannel.service(ReliableChannel.java:158)
> at
>
> org.apache.ignite.internal.client.thin.ReliableChannel.request(ReliableChannel.java:187)
> at
>
> org.apache.ignite.internal.client.thin.TcpClientCache.putAll(TcpClientCache.java:210)
> a
>
>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: DataRegion size with node start

2020-07-07 Thread Alex Plehanov
Hello,

For in-memory data regions, DataRegionConfiguration.getInitialSize() will
be allocated on node startup.
Additional memory will be allocated in chunks of some size on demand until
DataRegionConfiguration.getMaxSize() is reached.
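
For example, a sketch (the region name and sizes are illustrative):

DataRegionConfiguration region = new DataRegionConfiguration()
    .setName("myRegion")
    .setInitialSize(256L * 1024 * 1024)   // allocated at node startup
    .setMaxSize(2L * 1024 * 1024 * 1024); // grown to on demand, chunk by chunk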

Tue, Jul 7, 2020 at 16:17, 배혜원 :

> Hello!
>
> But I'm not using a persistent cache.
> Then what happens?
>
> Sent from my iPhone
>
> > On Jul 7, 2020 at 7:43 PM, akurbanov wrote:
> >
> > Hello kay,
> >
> > The data region memory specified will be allocated as soon as you
> > start your first persistent cache.
> >
> > Best regards,
> > Anton
> >
> >
> >
> > --
> > Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Out of memory error in data region with persistence enabled

2020-06-16 Thread Alex Plehanov
Raymond,

When a checkpoint is triggered you need to have some amount of free page
slots in offheap to save metadata (for example free-lists metadata,
partition counter gaps, etc). The number of required pages depends on count
of caches, count of partitions, workload, and count of CPUs. In worst
cases, you will need up to 256*caches*partitions*CPU count of pages only to
store free-list buckets metadata. This number of pages can't be calculated
statically, so the exact amount can't be reserved in advance. Currently,
1/4 of offheap memory is reserved for this purpose (when amount of dirty
pages riches 3/4 of total number of pages checkpoint is triggered), but
sometimes it's not enough.

In your case, 64Mb data-region is allocated. Page size is 16Kb, so you have
a total of about 4000 pages (real page size in offheap is a little bit
bigger than the configured page size). A checkpoint is triggered by the "too many
dirty pages" event, so 3/4 of the pages are already dirty. That leaves only 1000
pages to store metadata, which is too small. If the page size is 4kb the number of
clean pages is 4000, so your reproducer can pass in some circumstances.

Increase data region size to solve the problem.


Tue, Jun 16, 2020 at 05:39, Raymond Wilson :

> I have spent some more time on the reproducer. It is now very simple and
> reliably reproduces the issue with a simple loop adding slowly growing
> entries into a cache with no continuous query ro filters. I have attached
> the source files and the log I obtain when running it.
>
> Running from a clean slate (no existing persistent data) this reproducer
> exhibits the out of memory error when adding an element 4150 bytes in size.
>
> I did find this SO article (
> https://stackoverflow.com/questions/55937768/ignite-report-igniteoutofmemoryexception-out-of-memory-in-data-region)
> that describes the same problem. The solution offered was to increase the
> empty page pool size so it is larger than the biggest element being added.
> The empty pool size should always be bigger than the largest element added
> in the reproducer until the point of failure where 4150 bytes is the
> largest size being added. I tried increasing it to 200, it made no
> difference.
>
> The reproducer is using a pagesize of 16384 bytes.
>
> If I set the page size to the default 4096 bytes this reproducer does not
> show the error up to the size limit of 1 bytes the reproducer tests.
> If I set the page size to 8192 bytes the reproducer does reliably fail
> with the error at the item with 6941 bytes.
>
> This feels like a bug in handling non-default page sizes. Would you
> recommend switching from 16384 bytes to 4096 for our page size? The reason
> I opted for the larger size is that we may have elements ranging in size
> from 100's of bytes to 100Kb, and sometimes larger.
>
> Thanks,
> Raymond.
>
>
> On Thu, Jun 11, 2020 at 4:25 PM Raymond Wilson 
> wrote:
>
>> Just a correction to context of the data region running out of memory:
>> This one does not have a queue of items or a continuous query operating on
>> a cache within it.
>>
>> Thanks,
>> Raymond.
>>
>> On Thu, Jun 11, 2020 at 4:12 PM Raymond Wilson <
>> raymond_wil...@trimble.com> wrote:
>>
>>> Pavel,
>>>
>>> I have run into a different instance of a memory out of error in a data
>>> region in a different context from the one I wrote the reproducer for. In
>>> this case, there is an activity which queues items for processing at a
>>> point in the future and which does use a continuous query, however there is
>>> also significant vanilla put/get activity against a range of other caches..
>>>
>>> This data region was permitted to grow to 1Gb and has persistence
>>> enabled. We are now using Ignite 2.8
>>>
>>> I would like to understand if this is a possible failure mode given that
>>> the data region has persistence enabled. The underlying cause appears to be
>>> 'Unable to find a page for eviction'. Should this be expected on data
>>> regions with persistence?
>>>
>>> I have included the error below.
>>>
>>> This is the initial error reported by Ignite:
>>>
>>> 2020-06-11 12:53:35,082 [98] ERR [ImmutableCacheComputeServer] JVM will
>>> be halted immediately due to the failure: [failureCtx=FailureContext
>>> [type=CRITICAL_ERROR, err=class o.a.i.i.mem.IgniteOutOfMemoryException:
>>> Failed to find a page for eviction [segmentCapacity=13612, loaded=5417,
>>> maxDirtyPages=4063, dirtyPages=5417, cpPages=0, pinnedInSegment=0,
>>> failedToPrepare=5417]
>>> Out of memory in data region [name=Default-Immutable, initSize=128.0
>>> MiB, maxSize=1.0 GiB, persistenceEnabled=true] Try the following:
>>>   ^-- Increase maximum off-heap memory size
>>> (DataRegionConfiguration.maxSize)
>>>   ^-- Enable Ignite persistence
>>> (DataRegionConfiguration.persistenceEnabled)
>>>   ^-- Enable eviction or expiration policies]]
>>>
>>> Following this error is a lock dump, where this is the only thread with
>>> a lock:(I am assuming the structureId member with the value
>>> 

Re: Check service location possible ?

2020-06-11 Thread Alex Plehanov
Hello,

The IgniteServices.service(String name) method returns the service instance if
it's deployed locally, or null otherwise.
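
A sketch (MyService and the service name are placeholders):

MyService local = ignite.services().service("myService");
if (local != null) {
    // an implementation instance is deployed on this node
} else {
    // not deployed locally; a proxy would route calls to a remote node
    MyService proxy = ignite.services().serviceProxy("myService", MyService.class, false);
}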

Thu, Jun 11, 2020 at 11:16, Mikael :

> Hi!
>
> Is there any good way to detect if a service is running locally or remotely?
> I guess I could check the instance of the returned proxy and see if it
> is the implementation class or not, but it feels a bit ugly; is there
> some other nicer way to do this?
>
>
>


Re: ClientCacheConfiguration with java thin client

2020-05-26 Thread Alex Plehanov
Hello,

By thick client I mean "not by thin client" (sorry for the confusion), i.e.
by client node, by server node, by xml configuration, etc.

Tue, May 26, 2020 at 12:49, kay :

> Thank you so much for the reply.
>
> I used an xml config file for the cache ExpiryPolicy at the server node,
> like:
>
> <property name="expiryPolicyFactory">
> <bean class="..." factory-method="factoryOf">
> ...
> </bean>
> </property>
>
> Is this a thick client definition?
> What exactly does thick client mean?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: ClientCacheConfiguration with java thin client

2020-05-26 Thread Alex Plehanov
Hello,

On the server side there is a property ExpiryPolicyFactory (not just
ExpiryPolicy). A factory, in general, can return different expiry policy
values for each call, so there is no common way to convert a factory class to
a policy class, except for some predefined factory implementations which always
return the same policy values.
So, if you set an expiry policy for a cache by thin client, such a predefined
factory implementation is used and you will get the correct value back from the
getExpiryPolicy() method. But null will be returned by this method if you
set your custom expiry policy factory by thick client.
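
For example, a sketch using one of those predefined factories (assuming Ignite 2.8+):

import java.util.concurrent.TimeUnit;
import javax.cache.expiry.CreatedExpiryPolicy;
import javax.cache.expiry.Duration;

ClientCacheConfiguration ccfg = new ClientCacheConfiguration()
    .setName("CACHE_NAME")
    .setExpiryPolicy(new CreatedExpiryPolicy(new Duration(TimeUnit.MINUTES, 5)));
// reading the configuration back will return this policy from getExpiryPolicy()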

Tue, May 26, 2020 at 05:31, kay :

> Hello, I got ClientCacheConfiguration in my application with java thin
> client
> like
>
> ClientCache cache = igniteClient.cache("CACHE_NAME");
> ClientCacheConfiguration test = cache.getConfiguration();
>
>
> There is an ExpiryPolicy on 'CACHE_NAME',
> but I got null from test.getExpiryPolicy().
>
> I found the expiryPlc=null variable on the ClientCacheConfiguration object.
> Is it a bug, or is there something that I have to add?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Ignite SystemViews with java thin client

2020-05-25 Thread Alex Plehanov
Hello,

You can't get server-side JMX metrics by thin client, but you can query
views by SQL.
Here [1] is an example of querying system views using thin client
(currently system schema was renamed, so you should use setSchema("SYS")
instead of setSchema("IGNITE"))
The full list of system views implemented in Apache Ignite 2.8 can be found
here: [2].

[1]: https://apacheignite-sql.readme.io/docs/system-views#examples

[2]: https://apacheignite.readme.io/docs/system-views
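
For example, a sketch with the java thin client (client is an IgniteClient):

import org.apache.ignite.cache.query.SqlFieldsQuery;

client.query(new SqlFieldsQuery("select * from CACHES").setSchema("SYS"))
    .getAll()
    .forEach(System.out::println);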

Mon, May 25, 2020 at 10:55, kay :

> Hello, I have 4 remote nodes and I use java thin client for put/get data.
>
> I need to get Node, Data Region, and Cache information (Name, Size, Cache Mode,
> Data Eviction Mode and so on)
> because I have to make Ignite admin pages.
> I figured out JMX beans and System Views,
> but I don't know how to use JmxSystemViewExporterSpi and the new metrics
> subsystem in my application.
> Is there any way to get Node, Data Region, and Cache information by those means
> with the java thin client?
> If so, please share a URL.
>
> Thank you so much
>
>
>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Near Cache Support For Thin Clients

2020-05-21 Thread Alex Plehanov
Hello,

I don't think that a near cache for the thin client at the Ignite level is a
good idea.

Expiration is not the only case here. For thick clients, near caches are
transactionally consistent. For thin clients such a guarantee can never be
provided.
A near cache for thin clients would be either too heavy (and this contradicts
the thin client paradigm) or highly specialized (in which case it's better to
implement it at the user level).

Also, sometimes many thin clients are used inside one application (inside
one JVM for java thin client). I know deployments where thin client pool
approach or client per thread approach is used. In these cases, it's better
to have one near cache for all clients than have it inside each client.

I think it's better to provide mechanisms like event listeners or
continuous queries to make it possible to implement near caches on user
level with guarantees that best fit user's requirements.

Tue, May 19, 2020 at 15:47, Pavel Tupitsyn :

> Ok, thanks for the explanation.
> Yes, this is a good feature, and I've had this in mind for some time.
>
> Ticket filed: https://issues.apache.org/jira/browse/IGNITE-13037
> There are no immediate plans, but I think there is a possibility to
> achieve this by the end of the year.
>
> On Tue, May 19, 2020 at 2:52 PM Marty Jones  wrote:
>
>> The use case is having a local cache that stores most widely used cache
>> items in memory on server instead of having the network expense of pulling
>> them down every time they are requested.  The main thing is the near cache
>> has to support removing cache items that have expired on the server.
>>
>> The best use case I have is a web application that needs a cache item per
>> request.  we would not want to pull the cache item from the cluster every
>> request.   It would be way more efficient for the thin client to have a
>> near cache that would hold "hot" cache items that are requested frequently.
>>
>> On Tue, May 19, 2020 at 3:43 AM Pavel Tupitsyn 
>> wrote:
>>
>>> Can you please describe the use case in more detail?
>>> What do you expect from such a feature?
>>>
>>> On Tue, May 19, 2020 at 2:01 AM martybjo...@gmail.com <
>>> martybjo...@gmail.com> wrote:
>>>
 I wanted to see if there are any plans to support near caches for thin
 clients? I think it would be a great feature. I know I could use it right
 now.
 --
 Sent from the Apache Ignite Users mailing list archive
  at Nabble.com.

>>>


Re: remote ignite server and L4(for loadbalacing) with java thin client

2020-05-06 Thread Alex Plehanov
Hello,

Currently, there is no way to add a new address without a client restart.
But there is a feature under development [1]; perhaps it will be useful
for your case.

[1]:
https://cwiki.apache.org/confluence/display/IGNITE/IEP-44%3A+Thin+client+cluster+discovery

Wed, May 6, 2020 at 10:48, kay :

> Hello, Thank you so much!.
>
> I have 1 more question.
>
> I use IgniteClient API in my application.(singleton)
> when i start my application I regist IgniteClient bean using
> SpringFramework
> and declare ClientConfiguration  address like
>
> ClientConfiguration cfg = new
> ClientConfiguration().setAddresses("34.1.21.122:12800",
> "34.1.21.122:12800");
>
>  Is there any way to add new ClientConfiguration address without my
> application restart??
> I'm considering using L4 between my application and ignite server node..
> but
> also L4 should be restart.
>
> Is it possible to auto scale out java client thin level??
>
>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: remote ignite server and L4(for loadbalacing) with java thin client

2020-05-05 Thread Alex Plehanov
Hello,

1. Yes, the same way as with the thick client.
2. Here [1] is a short description of the partition awareness feature for
the .NET thin client (the description is also relevant to other thin
clients, including the Java thin client). Here [2] is a low-level
description (mostly for Apache Ignite developers).
3. Yes, the Java thin client stays connected to the selected server node
until the connection is lost (network segmentation, server node shutdown,
etc.). Only after losing the connection does the client try to connect to
the next server node.

[1] https://apacheignite-net.readme.io/docs/thin-client#partition-awareness
[2]
https://cwiki.apache.org/confluence/display/IGNITE/IEP-23%3A+Best+Effort+Affinity+for+thin+clients
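
For reference, in the Java thin client partition awareness is switched on
through ClientConfiguration. A minimal sketch (assuming a version that
ships the feature; the addresses are placeholders):

import org.apache.ignite.Ignition;
import org.apache.ignite.client.IgniteClient;
import org.apache.ignite.configuration.ClientConfiguration;

ClientConfiguration cfg = new ClientConfiguration()
    .setAddresses("host1:10800", "host2:10800", "host3:10800")
    // Route single-key operations directly to the node owning the key.
    .setPartitionAwarenessEnabled(true);

IgniteClient client = Ignition.startClient(cfg);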

Tue, May 5, 2020 at 10:15, kay :

> Hello, I'm waiting for reply :)
>
> I use java thin clien.
>
> 1. Can I use java thin client with 'Affinity Collocation'?
> 2. Is there any information about 'partition awareness' ? like, example
> link.
> 3. Ignite thin client on startup select's random server to connect
> available
> address.
> This selection(connect to server node) is not going to be change before
> server node is shut down?? or my application restart?
> I mean if I have only one Ignite thin client Object( like singleton ) in my
> application.
> Is it only use one server node?
>
>
> Thank you so much
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Apache Ignite 2.8.0: Node Metrics System View doesn't exist

2020-04-30 Thread Alex Plehanov
Ivan,

It's an IEP-35-related issue, but I don't see any open ticket for these
views [1]. Feel free to create one.

[1]:
https://issues.apache.org/jira/secure/IssueNavigator.jspa?jql=status%20in%20(Open%2C%20%22In%20Progress%22%2C%20%22Patch%20Available%22)%20AND%20labels%20%3D%20IEP-35

Thu, Apr 30, 2020 at 12:22, Ivan Fedorenkov :

> Thank you Alex! Is there any ticket for adding those views that I can
> track?
>
> Thu, Apr 30, 2020 at 09:23, Alex Plehanov :
>
>> Hello,
>>
>> There are a set of views which only available via SQL (not moved to the
>> new metrics engine yet), NODE_METRICS is one of them.
>> Full list of such views for Apache Ignite 2.8:
>> NODE_METRICS
>> NODE_ATTRIBUTES
>> BASELINE_NODES
>> LOCAL_CACHE_GROUPS_IO
>>
>>
>> Thu, Apr 30, 2020 at 08:00, Denis Magda :
>>
>>> Nikolay, could you please join the thread? Probably, that's a known
>>> limitation that should be addressed soon.
>>>
>>> -
>>> Denis
>>>
>>>
>>> On Tue, Apr 28, 2020 at 3:42 AM ivan.fedorenkov <
>>> ivan.fedoren...@gmail.com> wrote:
>>>
>>>> Hello!
>>>>
>>>> According to documentation
>>>> https://apacheignite.readme.io/docs/system-views ignite should expose
>>>> the
>>>> Node Metrics System View. However there is no such view in the code (
>>>> org.apache.ignite.spi.systemview.view package). Is it a peace of
>>>> not-implemented-yet feature or is it a bug in the documentation?
>>>>
>>>> Reference to SO question:
>>>>
>>>> https://stackoverflow.com/questions/61410519/apache-ignite-2-8-0-node-metrics-system-view-doesnt-exist/61461243
>>>>
>>>>
>>>> Best regards,
>>>> Ivan Fedorenkov
>>>>
>>>>
>>>>
>>>>
>>>> --
>>>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>>>>
>>>


Re: Apache Ignite 2.8.0: Node Metrics System View doesn't exist

2020-04-30 Thread Alex Plehanov
Hello,

There is a set of views which are only available via SQL (they have not
been moved to the new metrics engine yet); NODE_METRICS is one of them.
Full list of such views for Apache Ignite 2.8:
NODE_METRICS
NODE_ATTRIBUTES
BASELINE_NODES
LOCAL_CACHE_GROUPS_IO
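
For reference, these views can be queried like regular tables over the JDBC
thin driver. A minimal sketch (assuming SYS is the system schema name used
in 2.8):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

try (Connection conn = DriverManager.getConnection("jdbc:ignite:thin://127.0.0.1/");
     Statement stmt = conn.createStatement();
     ResultSet rs = stmt.executeQuery("SELECT * FROM SYS.NODE_METRICS")) {
    while (rs.next())
        System.out.println(rs.getString(1));
}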


Thu, Apr 30, 2020 at 08:00, Denis Magda :

> Nikolay, could you please join the thread? Probably, that's a known
> limitation that should be addressed soon.
>
> -
> Denis
>
>
> On Tue, Apr 28, 2020 at 3:42 AM ivan.fedorenkov 
> wrote:
>
>> Hello!
>>
>> According to documentation
>> https://apacheignite.readme.io/docs/system-views ignite should expose the
>> Node Metrics System View. However there is no such view in the code (
>> org.apache.ignite.spi.systemview.view package). Is it a peace of
>> not-implemented-yet feature or is it a bug in the documentation?
>>
>> Reference to SO question:
>>
>> https://stackoverflow.com/questions/61410519/apache-ignite-2-8-0-node-metrics-system-view-doesnt-exist/61461243
>>
>>
>> Best regards,
>> Ivan Fedorenkov
>>
>>
>>
>>
>> --
>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>>
>


Re: remote ignite server and L4(for loadbalacing) with java thin client

2020-04-06 Thread Alex Plehanov
Hello,

AFAIK, the Apache webserver load balancer handles only HTTP requests, and
the Ignite thin client protocol is not an HTTP-based protocol. Moreover,
the protocol is asynchronous (you can send several requests on the same
channel before receiving the first response) and stateful (for example, if
you start a query on one node you can't get the next page for that query
on another node). A webserver load balancer can't be used here.

On startup, the Ignite thin client selects a random server to connect to
from the available addresses, so there is already some kind of load
balancing at the per-client level (but not per-request).
Also, the "partition awareness" feature (disabled by default) allows the
client to connect to several nodes at the same time and send some types of
requests to the affinity node.


Mon, Apr 6, 2020 at 03:33, kay :

> Hi,
>
> I'm using Apache Webserver for loadbalancer.
>
> I got this access log at binding port.
>
>
>
>
> - myIp(xx.xx.xx.xxx) [06/Apr/2020:09:00:50 +0900] 13000 "-" 408 - 12 0 - -
>
>
>
>
> I think it was a timeout.
>
> Here is my httpd.conf set.
>
>
>
>
> <Proxy "balancer://cache-cluster">
>  BalancerMember "http://IP:PORT" loadfactor=1 hcpasses=3 hcfails=3 hcmethod=GET hcinterval=5
>  BalancerMember "http://IP:PORT" loadfactor=1 hcpasses=3 hcfails=3 hcmethod=GET hcinterval=5
>  ProxySet lbmethod=byrequests
> </Proxy>
>
> <VirtualHost *:13000>
>  ProxyRequests Off
>  ProxyPreserveHost On
>
>  ProxyPass "/" "balancer://cache-cluster/"
>  ProxyPassReverse "/" "balancer://cache-cluster/"
> </VirtualHost>
>
>
>
>
>
>
>
> and this is a stacktrace for connection
>
> Exception in thread "main"
> org.apache.ignite.client.ClientConnectionException: Ignite cluster is
> unavailable [sock=Socket[addr=intdev01/IP,port=13000,localport=50216]]
>   at
>
> org.apache.ignite.internal.client.thin.TcpClientChannel.handleIOError(TcpClientChannel.java:499)
>   at
>
> org.apache.ignite.internal.client.thin.TcpClientChannel.handleIOError(TcpClientChannel.java:491)
>   at
>
> org.apache.ignite.internal.client.thin.TcpClientChannel.access$100(TcpClientChannel.java:92)
>   at
>
> org.apache.ignite.internal.client.thin.TcpClientChannel$ByteCountingDataInput.read(TcpClientChannel.java:542)
>   at
>
> org.apache.ignite.internal.client.thin.TcpClientChannel$ByteCountingDataInput.readInt(TcpClientChannel.java:572)
>   at
>
> org.apache.ignite.internal.client.thin.TcpClientChannel.handshakeRes(TcpClientChannel.java:428)
>   at
>
> org.apache.ignite.internal.client.thin.TcpClientChannel.handshake(TcpClientChannel.java:401)
>   at
>
> org.apache.ignite.internal.client.thin.TcpClientChannel.(TcpClientChannel.java:153)
>   at
>
> org.apache.ignite.internal.client.thin.ReliableChannel$ClientChannelHolder.getOrCreateChannel(ReliableChannel.java:450)
>   at
>
> org.apache.ignite.internal.client.thin.ReliableChannel$ClientChannelHolder.getOrCreateChannel(ReliableChannel.java:439)
>   at
>
> org.apache.ignite.internal.client.thin.ReliableChannel$ClientChannelHolder.access$100(ReliableChannel.java:395)
>   at
>
> org.apache.ignite.internal.client.thin.ReliableChannel.(ReliableChannel.java:120)
>   at
>
> org.apache.ignite.internal.client.thin.TcpIgniteClient.(TcpIgniteClient.java:99)
>   at
>
> org.apache.ignite.internal.client.thin.TcpIgniteClient.(TcpIgniteClient.java:81)
>   at
>
> org.apache.ignite.internal.client.thin.TcpIgniteClient.start(TcpIgniteClient.java:208)
>   at org.apache.ignite.Ignition.startClient(Ignition.java:581)
>  at sfmi.framework.nexus.app.Application.main(Application.java:51)
>
>
>
>
>
>
>
> Is is right to use loadbalancer to access Ignite server??
>
>
>
>
> Thank you
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Issues with DataRegionMetrics and DataStorageMetrics

2020-01-30 Thread Alex Plehanov
Hello Mitchell,

It seems that you didn't enable metrics on the data storage and data
regions. You can enable them with the
DataStorageConfiguration.setMetricsEnabled and
DataRegionConfiguration.setMetricsEnabled methods. You can also enable
them at runtime via JMX.
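
A minimal sketch of both switches in programmatic configuration:

import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.DataRegionConfiguration;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

DataRegionConfiguration region = new DataRegionConfiguration()
    .setName("Default_Region")
    .setMetricsEnabled(true);                   // DataRegionMetrics

DataStorageConfiguration storage = new DataStorageConfiguration()
    .setMetricsEnabled(true)                    // DataStorageMetrics
    .setDefaultDataRegionConfiguration(region);

Ignition.start(new IgniteConfiguration().setDataStorageConfiguration(storage));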

Fri, Jan 31, 2020 at 04:25, Mitchell Rathbun (BLOOMBERG/ 731 LEX) <
mrathb...@bloomberg.net>:

> I am hoping to periodically print some Ignite metrics to Grafana. I tried
> printing various DataRegionMetrics and DataStorageMetrics method results
> and they were not what I expected. For the default DataRegionMetrics (the
> only one we are using), getName, getOffHeapUsedSize, and
> getPhysicalMemorySize all return 0, while getOffHeapSize and
> getTotalAllocatedSize return values that make more sense (200 MB for off
> heap (what we allocated on start up), and 1.7 GB for total allocated). For
> DataStorageMetrics, getCheckpointBufferSize, getOffHeapSize,
> getOffHeapUsedSize, getTotalAllocatedSize, and getWalTotalSize all return
> 0, which doesn't make sense given we have 1.7 GB data in the cache. Any
> idea why I am getting 0 for these various metrics? Is there a more reliable
> way to grab Ignite metrics so that I can publish them to a 3rd party
> platform?
>


Re: C# client not able to connect to cluster using IP address

2019-12-24 Thread Alex Plehanov
Hello,

47500 is the discovery SPI port. For thin client connections you should
use port 10800, or configure a different one via the
clientConnectorConfiguration property.
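
For example, a different port can be set like this (a sketch; 10900 is an
arbitrary choice):

import org.apache.ignite.configuration.ClientConnectorConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

IgniteConfiguration cfg = new IgniteConfiguration()
    .setClientConnectorConfiguration(
        new ClientConnectorConfiguration().setPort(10900));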

Tue, Dec 24, 2019 at 10:38, userx :

> Hi team,
>
> I have started an ignite server with the below command on my local machine
>
> ignite.bat C:\..\\..\config\Ignite-config.xml
>
> My config looks like
>
> <beans xmlns="http://www.springframework.org/schema/beans"
>        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
>        xsi:schemaLocation="
>        http://www.springframework.org/schema/beans
>        http://www.springframework.org/schema/beans/spring-beans.xsd">
>
>   <bean class="org.apache.ignite.configuration.IgniteConfiguration">
>     <property name="discoverySpi">
>       <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
>         <property name="ipFinder">
>           <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
>             <property name="addresses">
>               <list>
>                 <value>MyHostName:47500..47502</value>
>               </list>
>             </property>
>           </bean>
>         </property>
>       </bean>
>     </property>
>
>     <property name="communicationSpi">
>       <bean class="org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi">
>         <property name="localPort" value="48100"/>
>         <property name="localPortRange" value="10"/>
>         <property name="usePairedConnections" value="true"/>
>         <property name="messageQueueLimit" value="1024"/>
>         <property name="directSendBuffer" value="true"/>
>         <property name="selectorsCount" value="4"/>
>         <property name="socketWriteTimeout" value="30"/>
>       </bean>
>     </property>
>   </bean>
> </beans>
>
>
> I am running the client code as follows
> class Program
> {
> public static IIgniteClient IgniteThinClient;
> private static IgniteClientConfiguration
> GetIgniteClientConfiguration()
> {
> return new IgniteClientConfiguration
> {
> Host = "MyHostName",
> Port = 47500
> };
> }
> static void Main(string[] args)
> {
> try
> {
> Ignition.ClientMode = true;
> IgniteThinClient =
> Ignition.StartClient(GetIgniteClientConfiguration());
>}
> catch(Exception ex)
> {
> //Log Ex
> }
> }
> }
>
> I cannot see my client connected  and instead see the following logs
> >>> VM name: 195616@MyHostName
> >>> Local node [ID=5032DC52-0036-3BDF5136903, order=1, clientMode=false]
> >>> Local node addresses: [MyHostName/0:0:0:0:0:0:0:1, /127.0.0.1,
> >>> /10.10.17.82]
> >>> Local ports: TCP:10800 TCP:11211 TCP:47500 TCP:48100
>
> [20:29:43,699][INFO][main][GridDiscoveryManager] Topology snapshot [ver=1,
> locNode=5032dc52, servers=1, clients=0, state=ACTIVE, CPUs=8,
> offheap=3.2GB,
> heap=1.0GB]
> [20:30:00,233][INFO][tcp-disco-srvr-#3][TcpDiscoverySpi] TCP discovery
> accepted incoming connection [rmtAddr=/10.10.17.82, rmtPort=60732]
> [20:30:00,243][INFO][tcp-disco-srvr-#3][TcpDiscoverySpi] TCP discovery
> spawning a new thread for connection [rmtAddr=/10.10.17.82, rmtPort=60732]
> [20:30:00,244][INFO][tcp-disco-sock-reader-#4][TcpDiscoverySpi] Started
> serving remote node connection [rmtAddr=/10.10.17.82:60732, rmtPort=60732]
> [20:30:00,245][WARNING][tcp-disco-sock-reader-#4][TcpDiscoverySpi] Unknown
> connection detected (is some other software connecting to this Ignite port?
> missing SSL configuration on remote node?) [rmtAddr=/10.10.17.82]
> [20:30:00,245][INFO][tcp-disco-sock-reader-#4][TcpDiscoverySpi] Finished
> serving remote node connection [rmtAddr=/10.10.17.82:60732, rmtPort=60732
>
> If instead I replace MyHostName with localhost, i see the client getting
> connected on the server. Can you help me out with the issue ?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Is there any way to make sure data and its backups in different data center

2019-10-28 Thread Alex Plehanov
Hello,

You can set a custom affinityBackupFilter on the
RendezvousAffinityFunction. See [1] for an example.

[1] :
https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/cache/affinity/rendezvous/ClusterNodeAttributeAffinityBackupFilter.html
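
For illustration, a sketch of a cache that spreads copies across data
centers by a custom "DC" node attribute (the attribute name is an
assumption; each node must define it in its user attributes, e.g. via
IgniteConfiguration.setUserAttributes):

import org.apache.ignite.cache.affinity.rendezvous.ClusterNodeAttributeAffinityBackupFilter;
import org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunction;
import org.apache.ignite.configuration.CacheConfiguration;

RendezvousAffinityFunction aff = new RendezvousAffinityFunction();
// Avoid placing a backup on a node whose "DC" attribute matches the
// nodes already holding copies of the partition.
aff.setAffinityBackupFilter(new ClusterNodeAttributeAffinityBackupFilter("DC"));

CacheConfiguration<Integer, String> ccfg =
    new CacheConfiguration<Integer, String>("myCache")
        .setBackups(2)
        .setAffinityFunction(aff);

Note that the filter prefers, but cannot strictly guarantee, the spread if
one data center runs out of nodes.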


Mon, Oct 28, 2019 at 11:31, c c :

> HI, we are working on ignite v2.7.0 . We plan to deploy 5 server nodes
> across two data center (three in one data center and two in the other). We
> setup 2 backups for each entry, so we have three copies data for each
> entry. How can we make sure there is at least one copy in both data center?
>
> Regards
>
>


Re: Cache Miss Using Thin Client

2019-08-27 Thread Alex Plehanov
Hello,

Most probably the reason is the different default values of the
compactFooter property of BinaryConfiguration for the thin client and the
server (false for the client, true for the server).
Here is the ticket: [1]
You can work around this by setting a BinaryConfiguration with this
property set explicitly.

[1] : https://issues.apache.org/jira/browse/IGNITE-10960
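
A sketch of the workaround on the thin client side, aligning the client
with the server-side default of compactFooter=true:

import org.apache.ignite.Ignition;
import org.apache.ignite.client.IgniteClient;
import org.apache.ignite.configuration.BinaryConfiguration;
import org.apache.ignite.configuration.ClientConfiguration;

ClientConfiguration cfg = new ClientConfiguration()
    .setAddresses("127.0.0.1:10800")
    // Match the server-side default so binary objects are written
    // identically on both sides.
    .setBinaryConfiguration(new BinaryConfiguration().setCompactFooter(true));

IgniteClient client = Ignition.startClient(cfg);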

Tue, Aug 27, 2019 at 13:02, :

> Hi Stan,
>
>
>
> Yes, the second app I referred to is using Java thing client. My Ignite
> version is 2.7.5, for all apps in question.
>
>
>
> The cache key type is one of our domain objects which has just three
> String variables; I don’t think there’s anything unusual about it. I’m not
> explicitly setting any BinaryConfiguration.
>
>
>
> Regards,
>
>
>
> Simon.
>
>
>
> *From:* Stanislav Lukyanov 
> *Sent:* Friday, August 23, 2019 6:18 PM
> *To:* user@ignite.apache.org
> *Subject:* Re: Cache Miss Using Thin Client
>
>
>
> This message originated from outside our organisation and is from web
> based email - stanlukya...@gmail.com
>
> Hi,
>
>
>
> I'm thinking this could be related to differences in the binary marshaller
> configuration.
>
> Are you using Java thin client? What version? What is the cache key type?
> Are you setting a BinaryConfiguration explicitly on the client or server?
>
>
>
> Thanks,
>
> Stan
>
>
>
> On Fri, Aug 23, 2019 at 3:38 PM  wrote:
>
> Hello,
>
> I have one Spring Boot app running as a client node which uses
> SpringCacheManager + @Cacheable annotation on a service call. This is
> demonstrating expected read-through behaviour.
>
> I have a second app where I'm trying to implement the same behaviour using
> the thin-client. This is able to successfully "get" entries put in the
> cache through this application but not those using the application above,
> even if the key appears to be the same.
>
> Both applications are using a key class from the same dependency and are
> obviously populated with the same attributes. I've used the "query" method
> on the cache to retrieve all the cache entries, have verified they're using
> the same server node, the entries are there and so on.
>
> Any ideas why the "get" method from thin-client cannot find entries "put"
> by the client node? Or, any suggestions on appropriate logging to assist
> diagnosis?
>
> Thanks,
>
> Simon.
>
> This e-mail and any attachments are confidential and intended solely for
> the addressee and may also be privileged or exempt from disclosure under
> applicable law. If you are not the addressee, or have received this e-mail
> in error, please notify the sender immediately, delete it from your system
> and do not copy, disclose or otherwise act upon any part of this e-mail or
> its attachments.
>
> Internet communications are not guaranteed to be secure or virus-free. The
> Barclays Group does not accept responsibility for any loss arising from
> unauthorised access to, or interference with, any Internet communications
> by any third party, or from the transmission of any viruses. Replies to
> this e-mail may be monitored by the Barclays Group for operational or
> business reasons.
>
> Any opinion or other information in this e-mail or its attachments that
> does not relate to the business of the Barclays Group is personal to the
> sender and is not given or endorsed by the Barclays Group.
>
> Barclays Execution Services Limited provides support and administrative
> services across Barclays group. Barclays Execution Services Limited is an
> appointed representative of Barclays Bank UK plc, Barclays Bank plc and
> Clydesdale Financial Services Limited. Barclays Bank UK plc and Barclays
> Bank plc are authorised by the Prudential Regulation Authority and
> regulated by the Financial Conduct Authority and the Prudential Regulation
> Authority. Clydesdale Financial Services Limited is authorised and
> regulated by the Financial Conduct Authority.
>

Re: Thick client/thin client difference and how to get thin client port

2019-07-03 Thread Alex Plehanov
Hello,

There was a bug [1] with a single-address thin client configuration; it is
fixed now but the fix has not been released yet.
As a workaround, you can try adding one more server address (the same
address). For example:

ClientConfiguration cfg = new
ClientConfiguration().setAddresses(hostName + ":" + portNumber,
hostName + ":" + portNumber);


[1] https://issues.apache.org/jira/browse/IGNITE-11649

Tue, Jul 2, 2019 at 20:36, Shane Duan :

> I am having similar problem: I am testing with a one-node-cluster. After
> restart the Ignite cluster(server), Java thin client is not reconnecting.
> There is no port change on server for my case since I defined client port
> in the Ignite configuration:
>
> org.apache.ignite.client.ClientConnectionException: Ignite cluster is
> unavailable
> at
> org.apache.ignite.internal.client.thin.TcpClientChannel.read(TcpClientChannel.java:333)
> at
> org.apache.ignite.internal.client.thin.TcpClientChannel.receive(TcpClientChannel.java:154)
> at
> org.apache.ignite.internal.client.thin.ReliableChannel.service(ReliableChannel.java:126)
> at
> org.apache.ignite.internal.client.thin.TcpClientCache.get(TcpClientCache.java:82)
> at
> com.esri.arcgis.datastore.model.cachestore.BinaryTest.main(BinaryTest.java:69)
>
> import org.apache.ignite.IgniteBinary;
> import org.apache.ignite.Ignition;
> import org.apache.ignite.binary.BinaryObject;
> import org.apache.ignite.client.ClientCache;
> import org.apache.ignite.client.IgniteClient;
> import org.apache.ignite.configuration.ClientConfiguration;
>
> public class BinaryTest {
>
>   public static void main(String[] args) {
>
> IgniteClient igniteClient = null;
>
> try{
>
>   // Prepare a Ignite thin client
>   String hostName = args[0];
>   Integer portNumber = Integer.parseInt(args[1]);
>   String userName = args[2];
>   String password = args[3];
>
>   ClientConfiguration cfg = new 
> ClientConfiguration().setAddresses(hostName + ":" + portNumber);
>   cfg.setUserName(userName);
>   cfg.setUserPassword(password);
>
>   igniteClient = Ignition.startClient(cfg);
>
>   // Get IgniteBinary object using the ignite thin client.
>   IgniteBinary binary = igniteClient.binary();
>
>   // Build two test binary object.
>   // Note: we are defining the attributes for the binary object on the 
> fly.
>   BinaryObject val1 = binary.builder("person")
>   .setField("id", 1, int.class)
>   .setField("name", "Tom J", String.class)
>   .build();
>   BinaryObject val2 = binary.builder("person")
>   .setField("id", 2, int.class)
>   .setField("name", "Jeffson L", String.class)
>   .build();
>
>   // Create a cache, keep it as binary.
>   ClientCache cache = 
> igniteClient.getOrCreateCache("persons").withKeepBinary();
>
>   // Store the testing objects.
>   cache.put(1, val1);
>   cache.put(2, val2);
>
>
>   
>   // Please restart Ignite server here.
>   
>
>   // Get the objects.
>   BinaryObject cachedVal1 = cache.get(1);
>   System.out.println("Person1");
>   System.out.println("\tID   = " + cachedVal1.field("id"));
>   System.out.println("\tName = " + cachedVal1.field("name"));
>
>   BinaryObject cachedVal2 = cache.get(2);
>   System.out.println("Person2");
>   System.out.println("\tID   = " + cachedVal2.field("id"));
>   System.out.println("\tName = " + cachedVal2.field("name"));
>
>   // Destroy the cache.
>   System.out.print("Dropped caches...");
>   igniteClient.destroyCache("persons");
>
> } catch (Exception e) {
>   e.printStackTrace();
> } finally {
>   if (igniteClient != null) {
> try {
>   igniteClient.close();
> } catch (Exception ignore) {
> }
>   }
> }
>   }
>
> }
>
>
> On Mon, Jul 1, 2019 at 5:38 AM Alex Plehanov 
> wrote:
>
>> Hello, Rick
>>
>> There should be a message in the server log:
>> "Client connector processor has started on TCP port x" with "INFO"
>> level.
>>
>>> Mon, Jul 1, 2019 at 10:26, rick_tem :
>>
>>> Thanks for your response.  I tried 10801 and it failed.  My question is
>>> how
>>> do I find out which port it is using?
>>>
>>> Thanks,
>>> Rick
>>>
>>>
>>>
>>> --
>>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>>>
>>


Re: Thick client/thin client difference and how to get thin client port

2019-07-01 Thread Alex Plehanov
Hello, Rick

There should be a message in the server log:
"Client connector processor has started on TCP port x" at "INFO" level.

Mon, Jul 1, 2019 at 10:26, rick_tem :

> Thanks for your response.  I tried 10801 and it failed.  My question is how
> do I find out which port it is using?
>
> Thanks,
> Rick
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: How to know memory used by a cache or a set

2019-06-19 Thread Alex Plehanov
Denis,

The documentation on memory usage calculation covers a different case
(memory usage by the node).
There is no way (AFAIK) in released Ignite versions to calculate the
memory used by a cache or cache group when persistence is disabled.
A dedicated data region can be used for some of the caches in some cases,
and metrics can be collected for that data region; but when the cache is
destroyed (or data is deleted) the memory is not deallocated, it goes to
the reuse list.
There is a new metric, DataRegionMetrics#getTotalUsedPages (the count of
allocated pages minus the count of pages in the reuse list), which will
partially solve Yann's problem, but it will be available only in the next
Ignite release.
Also, as a temporary workaround, some internal APIs can be used to get the
count of pages in the reuse list and calculate the total used pages per
data region manually.
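
For reference, the per-region numbers that are available today can be read
like this (a sketch; "ignite" is a started node, and the allocated-pages
count includes pages parked in the reuse list, so treat it as an upper
bound):

import org.apache.ignite.DataRegionMetrics;

for (DataRegionMetrics m : ignite.dataRegionMetrics()) {
    System.out.println(m.getName()
        + ": allocated pages = " + m.getTotalAllocatedPages());
}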

Wed, Jun 19, 2019 at 07:59, Denis Magda :

> + dev list
>
> Ignite developers,
>
> Seems that the present solution for memory calculation doesn't work (check
> the thread):
>
> https://apacheignite.readme.io/v2.5/docs/memory-metrics#section-memory-usage-calculation
> <https://apacheignite.readme.io/docs/cache-metrics>
>
> Was it really broken?
>
> --
> Denis Magda
>
>
> On Thu, Jun 13, 2019 at 9:45 AM yann Blazart 
> wrote:
>
> > Ok, but I can't create dynamically a data region ? Because each time I
> > receive a new file to process, I create a cachegroup to handle it, then I
> > clean everything.
> >
> > On Thu, Jun 13, 2019 at 13:28, Alex Plehanov wrote:
> >
> >> Hello,
> >>
> >> It's a known issue [1]. Now you can get cache group size via JMX only if
> >> persistence is used.
> >> If persistence is not used you can get allocated size only for data
> >> region (but you can have more then one data region and assign cache
> groups
> >> to data regions any way you want)
> >>
> >> [1] : https://issues.apache.org/jira/browse/IGNITE-8517
> >>
> >> ср, 12 июн. 2019 г. в 16:00, yann.blaz...@externe.bnpparibas.com <
> >> yann.blaz...@externe.bnpparibas.com>:
> >>
> >>> Hello, I'm back.
> >>>
> >>> Well, I need to get memory used by each execution of my process, so I
> >>> put all involved caches into the same cacheGroup.
> >>>
> >>> If I use the CacheGroupMetricsBean, the size gave to me is 0 !
> >>> If I enable persistence on DataRegion, I get size, but I don't want to
> >>> use the persistence enabled.
> >>>
> >>> Is it a bug ?
> >>>
> >>> regards.
> >>>
> >>> > Le 27 mai 2019 à 11:09, ibelyakov  a
> écrit
> >>> :
> >>> >
> >>> > Hi,
> >>> >
> >>> > Did you turn on cache metrics for your data region?
> >>> >
> >>> > To turn the metrics on, use one of the following approaches:
> >>> > 1. Set DataRegionConfiguration.setMetricsEnabled(true) for every
> >>> region you
> >>> > want to collect the metrics for.
> >>> > 2. Use the DataRegionMetricsMXBean.enableMetrics() method exposed by
> a
> >>> > special JMX bean.
> >>> >
> >>> > More information regarding cache metrics available here:
> >>> > https://apacheignite.readme.io/docs/cache-metrics
> >>> >
> >>> > Regards,
> >>> > Igor
> >>> >
> >>> >
> >>> >
> >>> >
> >>> > --
> >>> > Sent from: http://apache-ignite-users.70518.x6.nabble.com/
> >>>
> >>> This message and any attachments (the "message") is
> >>> intended solely for the intended addressees and is confidential.
> >>> If you receive this message in error,or are not the intended
> >>> recipient(s),
> >>> please delete it and any copies from your systems and immediately
> notify
> >>> the sender. Any unauthorized view, use that does not comply with its
> >>> purpose,
> >>> dissemination or disclosure, either whole or partial, is prohibited.
> >>> Since the internet
> >>> cannot guarantee the integrity of this message which may not be
> >>> reliable, BNP PARIBAS
> >>> (and its subsidiaries) shall not be liable for the message if modified,
> >>> changed or falsified.
> >>> Do not print this message unless it is necessary, consider the
> >>> environment.
> >>>
> >>

Re: How to know memory used by a cache or a set

2019-06-13 Thread Alex Plehanov
Hello,

It's a known issue [1]. Currently you can get the cache group size via JMX
only if persistence is used.
If persistence is not used, you can get the allocated size only for a data
region (but you can have more than one data region and assign cache groups
to data regions any way you want).

[1] : https://issues.apache.org/jira/browse/IGNITE-8517

Wed, Jun 12, 2019 at 16:00, yann.blaz...@externe.bnpparibas.com <
yann.blaz...@externe.bnpparibas.com>:

> Hello, I'm back.
>
> Well, I need to get memory used by each execution of my process, so I put
> all involved caches into the same cacheGroup.
>
> If I use the CacheGroupMetricsBean, the size gave to me is 0 !
> If I enable persistence on DataRegion, I get size, but I don't want to use
> the persistence enabled.
>
> Is it a bug ?
>
> regards.
>
> > On May 27, 2019 at 11:09, ibelyakov wrote:
> >
> > Hi,
> >
> > Did you turn on cache metrics for your data region?
> >
> > To turn the metrics on, use one of the following approaches:
> > 1. Set DataRegionConfiguration.setMetricsEnabled(true) for every region
> you
> > want to collect the metrics for.
> > 2. Use the DataRegionMetricsMXBean.enableMetrics() method exposed by a
> > special JMX bean.
> >
> > More information regarding cache metrics available here:
> > https://apacheignite.readme.io/docs/cache-metrics
> >
> > Regards,
> > Igor
> >
> >
> >
> >
> > --
> > Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>
> This message and any attachments (the "message") is
> intended solely for the intended addressees and is confidential.
> If you receive this message in error,or are not the intended recipient(s),
> please delete it and any copies from your systems and immediately notify
> the sender. Any unauthorized view, use that does not comply with its
> purpose,
> dissemination or disclosure, either whole or partial, is prohibited. Since
> the internet
> cannot guarantee the integrity of this message which may not be reliable,
> BNP PARIBAS
> (and its subsidiaries) shall not be liable for the message if modified,
> changed or falsified.
> Do not print this message unless it is necessary, consider the environment.
>
>
>


Re: ./control.sh --baseline doesn't list nodes on Linux machines

2018-09-14 Thread Alex Plehanov
Your previous post contained the message "Current topology version: 20".
That does not look like a freshly started cluster of 3 nodes, so I think
the cluster was not restarted correctly and was still in compatibility
mode. In your last post it was restarted correctly and is not in
compatibility mode anymore. But if you connect visor to the cluster, it
will be switched to compatibility mode again.

Fri, Sep 14, 2018 at 11:09, es70 :

> UPDATE to the previous post - after my last message I started the visor and
> saw that there were some Ignite processes running - looks like some had
> been started earlier. So I killed those leftovers and started the
> cluster fresh.
> Now I can see the baseline
>
> [root@adp-apachelg01 bin]# ./control.sh --baseline
> Control utility [ver. 2.6.0#20180710-sha1:669feacc]
> 2018 Copyright(C) Apache Software Foundation
> User: root
>
> 
> Cluster state: active
> Current topology version: 3
>
> Baseline nodes:
> ConsistentID=4b799cb2-3953-48fb-b25f-88ac40ce0726, STATE=ONLINE
> ConsistentID=d8acbf66-f6e2-4876-b3f0-caf3846b50f5, STATE=ONLINE
> ConsistentID=dbc9fd4e-f2d0-4e65-85c6-bdd3fdbd7d6d, STATE=ONLINE
>
> 
> Number of baseline nodes: 3
>
> Other nodes not found.
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: ./control.sh --baseline doesn't list nodes on Linux machines

2018-09-13 Thread Alex Plehanov
There is also a bug (ticket [1]) related to daemon nodes (visor joins the
cluster as a daemon node). Visor switches the cluster to compatibility
mode, and in this mode baseline topology is not supported. You can read
more details here: [2]. I think this bug may be related to your case too.
Please shut down visor, restart the cluster and try to print the baseline
nodes again.

[1]: https://issues.apache.org/jira/browse/IGNITE-8774
[2]:
http://apache-ignite-users.70518.x6.nabble.com/Node-with-BaselineTopology-cannot-join-mixed-cluster-running-in-compatibility-mode-td22200.html

Thu, Sep 13, 2018 at 15:54, es70 :

> Alex, please note that I followed the cluster activation routine on a
> windows
> machine before doing the same on a linux and I succeded in activating the
> cluster
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: ./control.sh --baseline doesn't list nodes on Linux machines

2018-09-13 Thread Alex Plehanov
Hello Eugene,

Did you activate this cluster before?
Baseline topology can be set explicitly (via the control.sh script or the
Java API) or implicitly on the first activation. Currently, your cluster
is not activated. If the cluster has never been activated before and you
didn't set the baseline topology explicitly, the list of baseline nodes
will be empty.
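
For reference, both ways look like this from the Java API (a minimal
sketch; "ignite" is a started server node):

ignite.cluster().active(true); // the first activation forms the baseline implicitly

// Later, after topology changes, reset the baseline explicitly to the
// current topology version:
ignite.cluster().setBaselineTopology(ignite.cluster().topologyVersion());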

Thu, Sep 13, 2018 at 14:46, es70 :

> I have a cluster of RHEL machines (3 in total) and run Ignite 2.6 on them.
> The nodes seem to see each other since the visor shows them in the TOP
> command so I want to join the nodes to a cluster. I run the ./control.sh
> --baseline command on one of the machines to see the list of the available
> nodes. The command gives me an empty list:
>
> [root@adp-apachelg01 bin]# ./control.sh --baseline
> Control utility [ver. 2.6.0#20180710-sha1:669feacc]
> 2018 Copyright(C) Apache Software Foundation
> User: root
>
> Cluster state: inactive
> Current topology version: 4
> Baseline nodes not found.
>
> The question - what's wrong with it? Please find attached the config
> files which I use to start the nodes: cluster-ignite.xml and
> cluster-default.xml. Before running the nodes I ran the service.sh file
> to amend the firewall settings.
>
> Before running Ignite on the Linux machines I did the same on the
> Windows machines - the ./control.sh --baseline command gave me the
> expected listing of the nodes and I was able to join them in a cluster.
>
> One thing to note - I also did the following: I started the visor and a
> node (with the same config as above) on my Windows machine. The
> ./control.sh --baseline command listed the running nodes and the visor
> showed my Windows nodes. Then I started my Linux nodes. The control.sh
> command stopped showing the nodes, and the TOP command in the visor
> showed just my Linux nodes. Please advise!!
>
> regards,
> Evgeny
> --
> Sent from the Apache Ignite Users mailing list archive
>  at Nabble.com.
>


Re: Determine Node Health?

2018-09-05 Thread Alex Plehanov
Hello Chris,

There is no such metric as "node is healthy" now, but each node provides a
lot of low-level metrics, such as CPU usage, memory usage, job
execution/waiting times, etc., which you can combine to define your own
criteria for a "healthy node". These metrics are available cluster-wide
and contain information for each node; see the ClusterGroup#metrics() and
ClusterNode#metrics() methods.
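
For illustration, outliers could be flagged by scanning the per-node
metrics (a sketch; "ignite" is a started node and the 90% CPU threshold is
an arbitrary assumption):

import org.apache.ignite.cluster.ClusterMetrics;
import org.apache.ignite.cluster.ClusterNode;

for (ClusterNode node : ignite.cluster().forServers().nodes()) {
    ClusterMetrics m = node.metrics();
    // getCurrentCpuLoad() returns the load as a fraction in [0, 1].
    if (m.getCurrentCpuLoad() > 0.9)
        System.out.println("Possible outlier: " + node.id());
}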


Wed, Sep 5, 2018 at 0:39, Chris Berry :

> Hi,
>
> We are using an Ignite ComputeGrid, and it is mostly working nicely.
>
> Recently we had a Node with "Noisy Neighbors" in AWS that wrecked havoc in
> our ComputeGrid.
> Even though that Node was quite slow, it was never removed from the
> map/reduce – slowing down all computes.
>
> We have already built a system that allows us to add/subtract Nodes to the
> ComputeGrid based on when they are actually “ready to compute”,
> Because our Nodes take considerable time to be truly ready for computation
> (i.e. quite a bit of prepreparation is required).
> So, to accomplish this, we use a dynamic Ignite ClusterGroup when we create
> the compute.
>
> ```
> ClusterGroup readyNodes =
> readyForComputeMonitor.getNodesReadyForCompute(ignite.cluster());
> log.debug(dumpClusterGroup(readyNodes));
> return ignite.compute(readyNodes);
> ```
>
> So. My question.
> Does Ignite keep any information that we can use to determine if a Node is
> healthy?
> I.e. some way that we can locate any outliers in the ComputeGrid?
>
> For example, the Node in our recent incident was at 100% CPU and was much,
> much slower in the reduce phase.
>
> Any help/advise would be much appreciated.
>
> Thanks,
> -- Chris
>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: How check if the grid is stopped

2018-08-29 Thread Alex Plehanov
The ignite.cluster().active() method shows whether or not the cluster is
activated.
To check the started/stopped state, you can use the
Ignition.state(igniteInstanceName) method.
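
A minimal sketch ("myInstance" is a placeholder; use the no-argument
Ignition.state() for the default unnamed instance):

import org.apache.ignite.IgniteState;
import org.apache.ignite.Ignition;

if (Ignition.state("myInstance") == IgniteState.STARTED) {
    // The grid is running and safe to use.
}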

Wed, Aug 29, 2018 at 19:36, Paolo Di Tommaso :

> Hi Igniters,
>
> What's the suggested way to check if the grid is stopped. I'm trying
>
> ignite.cluster().active()
>
> but it's throwing the following exception.
>
>
>  java.lang.IllegalStateException: Grid is in invalid state to perform this
> operation. It either not started yet or has already being or have stopped
> [igniteInstanceName=nextflow, state=STOPPED]
>
>
> p
>


Re: Question on DiscoveryCustomEvent

2018-08-28 Thread Alex Plehanov
Hello,

StopRoutineDiscoveryMessage is sent when a continuous query is stopped
(when the query cursor is closed) to unregister the listener on all nodes.
DiscoveryCustomEvent is an internal API event; why are you listening to
this type of event?

Tue, Aug 28, 2018 at 12:43, Sashika :

> Hi,
>
> In one of our code segments, we evaluate a condition to check whether the
> event received by local listener is an instance of DiscoveryCustomEvent.
> Then if it is true we get the customMessage from it and check whether the
> message is an instance of StopRoutineDiscoveryMessage. Our Ignition.start()
> will start in client mode which will connect to an Ignite cluster.
>
> My question is what is the condition that we receive a
> StopRoutineDiscoveryMessage?. Can't find any more information online apart
> from the source code of StopRoutineDiscoveryMessage.java which does not
> have any description given.
>
> Any help would be greatly appreciated
>


Re: defaultDataRegionConfiguration is per-node or per-cluster?

2018-08-23 Thread Alex Plehanov
Hi,

maxSize and initialSize are per-node values.
In your example, if you have more than one node with this configuration,
each node will allocate 16GB of off-heap memory.

Thu, Aug 23, 2018 at 18:04, yfernando :

> Hi All,
>
> When setting the defaultDataRegionConfiguration parameter in the
> IgniteConfiguration, are the maxSize and  initialSize values per-node or
> cluster-wide?
>
> Ex: specifying 16GB for the Default_Region.
>
> Is the 16GB reserved on the node or reserved across the entire cluster?
>
>   <bean class="org.apache.ignite.configuration.IgniteConfiguration">
>     <property name="dataStorageConfiguration">
>       <bean class="org.apache.ignite.configuration.DataStorageConfiguration">
>         <property name="defaultDataRegionConfiguration">
>           <bean class="org.apache.ignite.configuration.DataRegionConfiguration">
>             <property name="name" value="Default_Region"/>
>             <property name="maxSize" value="#{16L * 1024 * 1024 * 1024}"/>
>             <property name="initialSize" value="#{16L * 1024 * 1024 * 1024}"/>
>           </bean>
>         </property>
>       </bean>
>     </property>
>     ...
>   </bean>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Failed to map keys for cache (all partition nodes left the grid)

2018-08-21 Thread Alex Plehanov
I think you are confusing the terms "current cluster topology" and
"baseline topology". Baseline topology cannot contain client nodes. The
current baseline topology can be printed or changed with the "control.sh"
script.
You can read more about baseline topology here: [1]

[1]: https://apacheignite.readme.io/docs/baseline-topology

Tue, Aug 21, 2018 at 19:40, scottmf :

> Hi Alex, Yes, the baseline topology contains both the server nodes and the
> client node. Everything is looks as I would expect including the IP
> addresses.
>
> A couple more notes:
>
>- when the nodes get restarted the IP addresses will disappear from
>the baseline topology, then will come back. The IP address may change when
>it comes back but the filesystem will still be intact.
>- once the second node is back up the cache comes back as well,
>meaning that the client can successfully query it again. The client is not
>able to query the cache while the first node is down when this exception
>occurs. I'm confused how that could be the case. I even experimented with
>changing my cache to be simply REPLICATED, rather than PARTITIONED, with
>backups > 1
>- in my igniteConfiguration I do set the number of threads in the
>public and system thread pools to 2 and the rebalance pool size to 1 as
>this is a simple dev setup Any idea?
>--
>Sent from the Apache Ignite Users mailing list archive
> at Nabble.com.
>
>


Re: Failed to map keys for cache (all partition nodes left the grid)

2018-08-21 Thread Alex Plehanov
Hello,

Does baseline topology include both of your server nodes?

2018-08-21 3:32 GMT+03:00 scottmf :

> Hi, I'm seeing the errors below in my ignite cluster. This occurs
> consistently in my env when I apply a "rolling upgrade" where I take down
> one node at a time by updating them with a new jar file then start it up.
> Once the node is up I then apply this to the next node and so on.
>
> Notes:
> - The cluster is using persistence and the cache is partitioned with one
> backup.
> - The error below is from a client node that is trying to access its
> cache.
> - I see that all nodes in the cluster are using the same topology version
> and they are all sync'd - they show two server nodes and one client node.
>
> here is the DataStorageConfiguration:
>
> DataStorageConfiguration dataStorageConfiguration = new 
> DataStorageConfiguration();
> 
> dataStorageConfiguration.setDefaultDataRegionConfiguration(defaultDataRegion);
> dataStorageConfiguration.setPageSize(8192);
> dataStorageConfiguration.setMetricsEnabled(true);
> dataStorageConfiguration.setWriteThrottlingEnabled(true);
> 
> dataStorageConfiguration.setStoragePath("/var/lib/ignite/persistence");
> dataStorageConfiguration.setWalMode(WALMode.LOG_ONLY);
> dataStorageConfiguration.setWalFlushFrequency(2000);
> dataStorageConfiguration.setWalPath("/var/lib/ignite/wal");
> 
> dataStorageConfiguration.setWalArchivePath("/var/lib/ignite/wal/archive");
>
> Any idea of what could be happening?
>
> [priority='ERROR' thread='http-nio-8080-exec-9' 
> class='org.apache.catalina.core.ContainerBase.[Tomcat].[localhost].[/].[dispatcherServlet]@182']
>  Servlet.service() for servlet [dispatcherServlet] in context with path [] 
> threw exception [Request processing failed; nested exception is 
> org.apache.ignite.cache.CacheServerNotFoundException: Failed to map keys for 
> cache (all partition nodes left the grid) [topVer=AffinityTopologyVersion 
> [topVer=27, minorTopVer=0], cache=helloCache]] with root cause
> org.apache.ignite.internal.cluster.ClusterTopologyServerNotFoundException: 
> Failed to map keys for cache (all partition nodes left the grid) 
> [topVer=AffinityTopologyVersion [topVer=27, minorTopVer=0], cache=helloCache]
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.GridPartitionedSingleGetFuture.serverNotFoundError(GridPartitionedSingleGetFuture.java:711)
>  ~[ignite-core-2.6.0.jar:2.6.0]
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.GridPartitionedSingleGetFuture.mapKeyToNode(GridPartitionedSingleGetFuture.java:332)
>  ~[ignite-core-2.6.0.jar:2.6.0]
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.GridPartitionedSingleGetFuture.map(GridPartitionedSingleGetFuture.java:216)
>  ~[ignite-core-2.6.0.jar:2.6.0]
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.GridPartitionedSingleGetFuture.init(GridPartitionedSingleGetFuture.java:208)
>  ~[ignite-core-2.6.0.jar:2.6.0]
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.getAsync0(GridDhtAtomicCache.java:1391)
>  ~[ignite-core-2.6.0.jar:2.6.0]
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.access$1600(GridDhtAtomicCache.java:130)
>  ~[ignite-core-2.6.0.jar:2.6.0]
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache$16.apply(GridDhtAtomicCache.java:469)
>  ~[ignite-core-2.6.0.jar:2.6.0]
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache$16.apply(GridDhtAtomicCache.java:467)
>  ~[ignite-core-2.6.0.jar:2.6.0]
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.asyncOp(GridDhtAtomicCache.java:756)
>  ~[ignite-core-2.6.0.jar:2.6.0]
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.getAsync(GridDhtAtomicCache.java:467)
>  ~[ignite-core-2.6.0.jar:2.6.0]
>   at 
> org.apache.ignite.internal.processors.cache.GridCacheAdapter.get0(GridCacheAdapter.java:4556)
>  ~[ignite-core-2.6.0.jar:2.6.0]
>   at 
> org.apache.ignite.internal.processors.cache.GridCacheAdapter.get(GridCacheAdapter.java:4537)
>  ~[ignite-core-2.6.0.jar:2.6.0]
>   at 
> org.apache.ignite.internal.processors.cache.GridCacheAdapter.get(GridCacheAdapter.java:1350)
>  ~[ignite-core-2.6.0.jar:2.6.0]
>   at 
> org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.get(IgniteCacheProxyImpl.java:907)
>  ~[ignite-core-2.6.0.jar:2.6.0]
>   at 
> org.apache.ignite.internal.processors.cache.GatewayProtectedCacheProxy.get(GatewayProtectedCacheProxy.java:608)
>  ~[ignite-core-2.6.0.jar:2.6.0]
>   at com.example.cache.ignite.IgniteCache.get(IgniteCache.java:67) 
> ~[ignite-cache-manager-1.5.14.1.15-SNAPSHOT.jar:?]
>   at 
> 

Re: Confusion/inaccurate visor stats

2018-08-17 Thread Alex Plehanov
Hello, Eugene!

> 2) How come Non-heap memory usage is minimal?
"Non-heap memory" here it's JVM managed memory regions other then heap used
for internal JVM purposes (JIT compiler, method area, etc.), it's not a
memory used by Ignite to store data (information about this memory can be
obtained by data region metrics).

> 3) How can I tell how much memory the table is consuming?
AFAIK there is no such functionality in visor now. There is the JMX metric
CacheGroupMetricsMXBean#getTotalAllocatedSize, which can help you, but
unfortunately there is a bug in the current implementation of this metric
with the persistent store enabled (ticket [1], already fixed; the fix will
be available in Ignite 2.7), and there is still no implementation of this
metric with persistence disabled (ticket [2]).

[1]: https://issues.apache.org/jira/browse/IGNITE-8515
[2]: https://issues.apache.org/jira/browse/IGNITE-8517


2018-08-17 18:37 GMT+03:00 Alexey Kuznetsov :

> Hi!
>
> > 1) In Data region metrics, why is everything 0?
> Did you enable metrics?
>
> See:
>   DataRegionConfiguration dataRegionCfg = new DataRegionConfiguration();
>   dataRegionCfg.setMetricsEnabled(true);
>
> > 4) Total busy time is 15s, the upload took longer than that.
> This is actually time spend in compute engine see:
> org.apache.ignite.cluster.ClusterMetrics
> /**
>  * Gets total time this node spent executing jobs.
>  * @return Total time this node spent executing jobs.
>  */
> public long getTotalBusyTime();
>
> I hope 2) & 3) will answer some one who knows about it.
>
> --
> Alexey Kuznetsov
>
>


Re: why my ignite cluster went to compatibility mode

2018-08-17 Thread Alex Plehanov
Hi, Huang,

This was already discussed here [1]. You probably ran visor (a daemon
node) to join the cluster, which switched the cluster to compatibility
mode. You can restart the cluster to switch it to the normal state again.
The fix (ticket IGNITE-8774) will be available in Ignite 2.7.

[1]:
http://apache-ignite-users.70518.x6.nabble.com/Node-with-BaselineTopology-cannot-join-mixed-cluster-running-in-compatibility-mode-td22200.html

2018-08-17 10:36 GMT+03:00 Huang Meilong :

> Hi all,
>
>
> I'm new to ignite, I started a ignite cluster with three nodes
> yesterday(with command: ./ignite.sh -v -np /root/apache-ignite-fabric-2.
> 6.0-bin/examples/config/persistentstore/examples-persistent-store.xml), I
> found one node is down without any log today, and when I try to restart the
> lost node, it say that cluster is in compatibility mode and can not join
> new node. How can I restart the new node?
>
>
> """
>
> [15:19:14,299][INFO][tcp-disco-sock-reader-#5][TcpDiscoverySpi] Started
> serving remote node connection [rmtAddr=/172.16.157.129:34695,
> rmtPort=34695]
> [15:19:14,414][SEVERE][tcp-disco-msg-worker-#3][TcpDiscoverySpi]
> TcpDiscoverSpi's message worker thread failed abnormally. Stopping the node
> in order to prevent cluster wide instability.
> class org.apache.ignite.IgniteException: Node with BaselineTopology
> cannot join mixed cluster running in compatibility mode
> at org.apache.ignite.internal.processors.cluster.
> GridClusterStateProcessor.onGridDataReceived(GridClusterStateProcessor.
> java:714)
> at org.apache.ignite.internal.managers.discovery.
> GridDiscoveryManager$5.onExchange(GridDiscoveryManager.java:883)
> at org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi.
> onExchange(TcpDiscoverySpi.java:1939)
> at org.apache.ignite.spi.discovery.tcp.ServerImpl$
> RingMessageWorker.processNodeAddedMessage(ServerImpl.java:4354)
> at org.apache.ignite.spi.discovery.tcp.ServerImpl$
> RingMessageWorker.processMessage(ServerImpl.java:2744)
> at org.apache.ignite.spi.discovery.tcp.ServerImpl$
> RingMessageWorker.processMessage(ServerImpl.java:2536)
> at org.apache.ignite.spi.discovery.tcp.ServerImpl$
> MessageWorkerAdapter.body(ServerImpl.java:6775)
> at org.apache.ignite.spi.discovery.tcp.ServerImpl$
> RingMessageWorker.body(ServerImpl.java:2621)
> at org.apache.ignite.spi.IgniteSpiThread.run(
> IgniteSpiThread.java:62)
> [15:19:14,423][SEVERE][tcp-disco-msg-worker-#3][] Critical system error
> detected. Will be handled accordingly to configured handler [hnd=class
> o.a.i.failure.StopNodeOrHaltFailureHandler, failureCtx=FailureContext
> [type=SYSTEM_WORKER_TERMINATION, err=class o.a.i.IgniteException: Node
> with BaselineTopology cannot join mixed cluster running in compatibility
> mode]]
> class org.apache.ignite.IgniteException: Node with BaselineTopology
> cannot join mixed cluster running in compatibility mode
> at org.apache.ignite.internal.processors.cluster.
> GridClusterStateProcessor.onGridDataReceived(GridClusterStateProcessor.
> java:714)
> at org.apache.ignite.internal.managers.discovery.
> GridDiscoveryManager$5.onExchange(GridDiscoveryManager.java:883)
> at org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi.
> onExchange(TcpDiscoverySpi.java:1939)
> at org.apache.ignite.spi.discovery.tcp.ServerImpl$
> RingMessageWorker.processNodeAddedMessage(ServerImpl.java:4354)
> at org.apache.ignite.spi.discovery.tcp.ServerImpl$
> RingMessageWorker.processMessage(ServerImpl.java:2744)
> at org.apache.ignite.spi.discovery.tcp.ServerImpl$
> RingMessageWorker.processMessage(ServerImpl.java:2536)
> at org.apache.ignite.spi.discovery.tcp.ServerImpl$
> MessageWorkerAdapter.body(ServerImpl.java:6775)
> at org.apache.ignite.spi.discovery.tcp.ServerImpl$
> RingMessageWorker.body(ServerImpl.java:2621)
> at org.apache.ignite.spi.IgniteSpiThread.run(
> IgniteSpiThread.java:62)
> [15:19:14,424][SEVERE][tcp-disco-msg-worker-#3][] JVM will be halted
> immediately due to the failure: [failureCtx=FailureContext
> [type=SYSTEM_WORKER_TERMINATION, err=class o.a.i.IgniteException: Node
> with BaselineTopology cannot join mixed cluster running in compatibility
> mode]]
>
> """
>
>
> Thanks,
>
> Huang
>


Re: Node with BaselineTopology cannot join mixed cluster running in compatibility mode

2018-08-13 Thread Alex Plehanov
You can restart the cluster. The compatibility mode flag is not stored in
the PDS; it exists only in memory.

2018-08-13 9:06 GMT+03:00 arunkjn :

> Thanks Alex,
>
> How can I fix this in the current cluster without resetting the persistence
> state of entire cluster?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Node with BaselineTopology cannot join mixed cluster running in compatibility mode

2018-08-10 Thread Alex Plehanov
Hi, Arun

When a daemon node (visor) joins the cluster for the first time, it
switches the cluster to compatibility mode. In this mode a node containing
baseline topology information will fail to join, even if the daemon node
has already left the cluster. This issue was fixed by ticket [1], and the
fix will be available in Ignite 2.7.

[1] : https://issues.apache.org/jira/browse/IGNITE-8774

2018-08-10 14:22 GMT+03:00 arunkjn :

> Hi,
>
> I am experiencing the same issue with a ignite 2.5 cluster deployed in
> kubernetes.
>
> Scenario:
>
> 5 server nodes hosting some caches. They are deployed multiple times
> without
> issues.
> I connected a visor client to this cluster (this is the first time i used
> visor ever on this cluster)
> On another rolling update, the nodes fail to restart with the message -
> class org.apache.ignite.IgniteException: Node with BaselineTopology cannot
> join mixed cluster running in compatibility mode
> at
> org.apache.ignite.internal.processors.cluster.GridClusterStateProcessor.
> onGridDataReceived(GridClusterStateProcessor.java:714)
> at
> org.apache.ignite.internal.managers.discovery.GridDiscoveryManager$5.
> onExchange(GridDiscoveryManager.java:883)
> at
> org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi.
> onExchange(TcpDiscoverySpi.java:1939)
> at
> org.apache.ignite.spi.discovery.tcp.ServerImpl$RingMessageWorker.
> processNodeAddedMessage(ServerImpl.java:4354)
> at
> org.apache.ignite.spi.discovery.tcp.ServerImpl$RingMessageWorker.
> processMessage(ServerImpl.java:2744)
> at
> org.apache.ignite.spi.discovery.tcp.ServerImpl$RingMessageWorker.
> processMessage(ServerImpl.java:2536)
> at
> org.apache.ignite.spi.discovery.tcp.ServerImpl$MessageWorkerAdapter.body(
> ServerImpl.java:6775)
> at
> org.apache.ignite.spi.discovery.tcp.ServerImpl$RingMessageWorker.body(
> ServerImpl.java:2621)
> at org.apache.ignite.spi.IgniteSpiThread.run(
> IgniteSpiThread.java:62)
>
>
> I can also see this in the logs -
>
> class org.apache.ignite.IgniteCheckedException: Failed to start SPI:
> TcpDiscoverySpi [addrRslvr=null, sockTimeout=5000, ackTimeout=5000,
> marsh=JdkMarshaller
> [clsFilter=org.apache.ignite.internal.IgniteKernal$5@3ed03652],
> reconCnt=10,
> reconDelay=2000, maxAckTimeout=60, forceSrvMode=false,
> clientReconnectDisabled=false, internalLsnr=null]
> at
> org.apache.ignite.internal.managers.GridManagerAdapter.
> startSpi(GridManagerAdapter.java:300)
> at
> org.apache.ignite.internal.managers.discovery.GridDiscoveryManager.start(
> GridDiscoveryManager.java:915)
> at
> org.apache.ignite.internal.IgniteKernal.startManager(
> IgniteKernal.java:1720)
> at org.apache.ignite.internal.IgniteKernal.start(
> IgniteKernal.java:1033)
> at
> org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start0(
> IgnitionEx.java:2014)
> at
> org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start(
> IgnitionEx.java:1723)
> at org.apache.ignite.internal.IgnitionEx.start0(IgnitionEx.
> java:1151)
> at
> org.apache.ignite.internal.IgnitionEx.startConfigurations(
> IgnitionEx.java:1069)
> at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.
> java:955)
> at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.
> java:854)
> at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.
> java:724)
> at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.
> java:693)
> at org.apache.ignite.Ignition.start(Ignition.java:352)
> at
> com.mediaiq.caps.platform.choreography.data.DataNodeStartup.main(
> DataNodeStartup.java:43)
> Caused by: class org.apache.ignite.spi.IgniteSpiException: Thread has been
> interrupted.
> at
> org.apache.ignite.spi.discovery.tcp.ServerImpl.
> joinTopology(ServerImpl.java:938)
> at
> org.apache.ignite.spi.discovery.tcp.ServerImpl.
> spiStart(ServerImpl.java:373)
> at
> org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi.
> spiStart(TcpDiscoverySpi.java:1948)
> at
> org.apache.ignite.internal.managers.GridManagerAdapter.
> startSpi(GridManagerAdapter.java:297)
> ... 13 more
>
>
> I am not sure if visor was still connected while deploying, but the problem
> seem to persist after disconnecting visor as well. Please advise on how to
> fix this. Has this been fixed in 2.6?
>
> Thanks,
> Arun
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Need help for setting offheap memory

2018-08-04 Thread Alex Plehanov
Off-heap and heap memory regions are used for different purposes and can't
replace each other. You can't get rid of an OOME in the heap by increasing
off-heap memory.
Can you provide the full exception stack trace?

2018-08-03 20:55 GMT+03:00 Amol Zambare :

> Thanks Alex and Denis
>
> We have configured off-heap memory to 100GB and we have a 10-node Ignite
> cluster. However, when we run the Spark job we see the following error in
> the Ignite logs. When we run the Spark job, heap utilization on most of the
> Ignite nodes increases significantly even though we are using off-heap
> storage. We have set the JVM heap size on each Ignite node to 50GB. Please
> suggest.
>
> java.lang.OutOfMemoryError: GC overhead limit exceeded
>  at java.util.Arrays.copyOf(Arrays.java:3332)
>  at java.lang.AbstractStringBuilder.ensureCapacityInternal(
> AbstractStringBuilder.java:124)
>
>
> On Fri, Aug 3, 2018 at 4:16 AM, Alex Plehanov 
> wrote:
>
>>  "Non-heap memory ..." metrics in visor have nothing to do with offheap
>> memory allocated for data regions. "Non-heap memory" returned by visor
>> it's JVM managed memory regions other then heap used for internal JVM
>> purposes (JIT compiler, etc., see [1]). Memory allocated in offheap by
>> Ignite for data regions (via "unsafe") not included into this metrics. Some
>> data region related metrics in visor were implemented in Ignite 2.4.
>>
>> [1] https://docs.oracle.com/javase/8/docs/api/java/lang/manageme
>> nt/MemoryMXBean.html
>>
>
>


Re: Need help for setting offheap memory

2018-08-03 Thread Alex Plehanov
  "Non-heap memory ..." metrics in visor have nothing to do with offheap
memory allocated for data regions. "Non-heap memory" returned by visor it's
JVM managed memory regions other then heap used for internal JVM purposes
(JIT compiler, etc., see [1]). Memory allocated in offheap by Ignite for
data regions (via "unsafe") not included into this metrics. Some data
region related metrics in visor were implemented in Ignite 2.4.

[1]
https://docs.oracle.com/javase/8/docs/api/java/lang/management/MemoryMXBean.html
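
For illustration, a small sketch that prints both values side by side,
assuming Ignite 2.4+ and metrics enabled on the region via
DataRegionConfiguration.setMetricsEnabled(true):

import java.lang.management.ManagementFactory;
import org.apache.ignite.DataRegionMetrics;
import org.apache.ignite.Ignite;

public class MemoryMetricsSketch {
    static void print(Ignite ignite) {
        // JVM-managed non-heap areas (metaspace, code cache, ...) -
        // this is what the visor metric reflects.
        System.out.println("JVM non-heap: " +
            ManagementFactory.getMemoryMXBean().getNonHeapMemoryUsage());

        // Ignite data regions are allocated offheap via Unsafe and are
        // invisible to the JVM metric above.
        int pageSize = ignite.configuration()
            .getDataStorageConfiguration().getPageSize();

        for (DataRegionMetrics m : ignite.dataRegionMetrics())
            System.out.println(m.getName() + ": " +
                m.getTotalAllocatedPages() * pageSize + " bytes allocated");
    }
}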


Re: Toad Connection to Ignite

2018-08-02 Thread Alex Plehanov
As far as I know, TOAD supports only a limited set of DBMSs.
For SQL querying you can use any other tool that supports JDBC, such as
DBeaver (GUI) or sqlline (CLI, included in the Ignite distribution).
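
For example, a minimal sketch of querying through the thin JDBC driver from
plain Java (the host, schema and table names are placeholders); any
JDBC-aware GUI such as DBeaver uses the same driver class and URL:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ThinJdbcSketch {
    public static void main(String[] args) throws Exception {
        // The thin JDBC driver ships with the Ignite distribution.
        Class.forName("org.apache.ignite.IgniteJdbcThinDriver");

        try (Connection conn =
                 DriverManager.getConnection("jdbc:ignite:thin://127.0.0.1/");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(
                 "SELECT * FROM \"SomeCache\".SOME_TABLE")) {
            while (rs.next())
                System.out.println(rs.getString(1));
        }
    }
}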

2018-08-02 13:48 GMT+03:00 ApacheUser :

> Hello Ignite Team,
>
> Is it possible to connect to Ignite from TOAD tool? for SQL Querying?.
>
>
> Thanks
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: control.sh does not support SSL

2018-07-20 Thread Alex Plehanov
Yes, you're right, control.sh does not support SSL.

2018-07-20 10:24 GMT+03:00 Paul Anderson :

> Well.. It appears not to, can you confirm pls.
>


Re: Cache size in offheap mem in bytes

2018-06-28 Thread Alex Plehanov
It's incorrect to use the cache object to calculate the cache data size.
What you got is a footprint of the Ignite infrastructure used to manage your
data, not a footprint of your data itself, since the data is stored off-heap
and this tool only calculates the on-heap size of objects reachable from the
cache object. You can see this in your sample: 1 000 000 objects were
created with 5 longs each, but only 2 872 685 longs appear in the footprint.

To roughly estimate the off-heap memory needed for cache data, you can
enable persistence, put a large number of entries into the cache and then
measure the size of the cache folder (wait until a checkpoint completes to
get correct results). Note that pages in off-heap memory carry some extra
header data, so the final off-heap memory needed will be a little larger
than the cache folder size.
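
A minimal sketch of that approach, assuming Ignite 2.3+ native persistence
(the cache name, value type and the folder path in the comment are
placeholders):

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class OffheapSizeEstimateSketch {
    public static void main(String[] args) {
        DataStorageConfiguration storageCfg = new DataStorageConfiguration();
        storageCfg.getDefaultDataRegionConfiguration().setPersistenceEnabled(true);

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setDataStorageConfiguration(storageCfg);

        try (Ignite ignite = Ignition.start(cfg)) {
            ignite.cluster().active(true); // persistence needs explicit activation

            IgniteCache<Long, byte[]> cache = ignite.getOrCreateCache("sizeTest");

            for (long i = 0; i < 1_000_000; i++)
                cache.put(i, new byte[64]); // substitute your real value type

            // After a checkpoint completes, measure the cache folder under
            // the node's work directory, e.g.:
            //   du -sh work/db/<consistentId>/cache-sizeTest
        }
    }
}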


2018-06-27 17:55 GMT+03:00 Prasad Bhalerao :

> Hi,
>
> I have written a test program to find the cache size in bytes. I am using
> this tool
> (memory-measurer) to find object sizes and their footprints. I tried to
> use this tool to find the cache size in bytes. I am using an on-heap cache.
>
> Am I using the correct object (the cache in this case) to check the cache
> size in memory? Can someone please clarify?
>
> I tried to use JProfiler in instrumentation mode to check the on-heap cache
> size, but could not locate the exact class/objects.
> private void pushData(CacheName cacheName, List DataKey>> datas) {
>
>   final IgniteCache cache = ignite.cache(cacheName.name());
>
>   LOGGER.info("cache size in bytes : {}", 
> IpV4RangeTest.bytesToHuman(MemoryMeasurer.measureBytes(cache)));
>
>   TestDataObj data8 = null;
>   for(int ctr=0;ctr<1_000_000;ctr++){
> data8 = new TestDataObj();
> data8.setId(ctr);
> data8.setSubscriptionId(SUBSCRIPTION_ID);
> cache.put(data8.getKey(), data8);
>   }
>
>   LOGGER.info("TestDataObj size in bytes : {}",
>   IpV4RangeTest.bytesToHuman(MemoryMeasurer.measureBytes(data8)));
>
>   LOGGER.info("cache size in bytes : {}",
>   IpV4RangeTest.bytesToHuman(MemoryMeasurer.measureBytes(cache)));
>
>   Footprint footprint = ObjectGraphMeasurer.measure(cache);
>
>   LOGGER.info("{} Footprint={}", cacheName.name(), footprint.toString());
>
>   LOGGER.info("{} size={}", cacheName.name(), cache.size());
>   try {
> Thread.sleep(10);
>   } catch (InterruptedException e) {
> e.printStackTrace();
>   }
> }
>
>
>
> *  Output:*
>
>   *cache size in bytes* : 36.4 MB  *[Empty Cache]*
>
> * TestDataObj size in bytes : 64 byte  cache size in bytes* : 493.23 MB
>   *[After adding 1 million objects]*
>   *Footprint*=Footprint
>   [objects=10962325,
>   references=30951850,
>   primitives={byte=130277671, long=2872685, double=58, float=52015,
> int=11651118, boolean=2156105, char=1905788, short=10457}]
>
>
> *Inside TestDataObj  class:*
>
> public class TestDataObj implements Data {
>
>   @QuerySqlField
>   private long id;
>
>   @QuerySqlField(orderedGroups = {@QuerySqlField.Group(name = 
> "ag_domain_nb_override_data", order = 1)})
>   private long assetGroupDomainId;
>
>   @QuerySqlField(orderedGroups = {@QuerySqlField.Group(name = 
> "ag_domain_nb_override_data", order = 2)})
>   private long startIp;
>
>   @QuerySqlField(orderedGroups = {@QuerySqlField.Group(name = 
> "ag_domain_nb_override_data", order = 3)})
>   private long endIp;
>
>   private long subscriptionId;
>
>   @QuerySqlField(orderedGroups = {@QuerySqlField.Group(name = 
> "ag_domain_nb_override_data", order = 4)})
>   private int partitionId;
>
>   private boolean isUpdatedData;
>
> }
>
>
>
> Thanks,
> PRasad
>
> On Wed, Jun 27, 2018 at 3:00 PM dkarachentsev 
> wrote:
>
>> 1) This is applicable to Ignite. Since Ignite grew out of GridGain, such
>> things may still appear in the docs because they were missed for removal.
>> 2) Yes, and I would say the overhead could be even bigger. But I cannot
>> say definitely how much, because Ignite doesn't store data sequentially;
>> there are a lot of nuances.
>> 3) Ignite transactions cache entries on-heap, and this works only for
>> TRANSACTIONAL caches.
>>
>> Thanks!
>> -Dmitry
>>
>>
>>
>> --
>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>>
>


Re: Error: Failed to parse query: SELECT * FROM ...

2018-03-22 Thread Alex Plehanov
In the case of Cache_A and Cache_B you need to quote the schema names, since
schema names are case-sensitive and the parser converts unquoted identifiers
in your query to upper case.
Try these statements:
SELECT * FROM "Cache_B".B0;
SELECT * FROM "Cache_A".A;


2018-03-21 20:27 GMT+03:00 Wilhelm :

> Super - Thanks a lot Alex.
>
> So the first one works well (sqlline, dbeaver and java)
> SELECT * FROM B.B0;
>
> But the second one failed (dbeaver and sqlline),
> SELECT * FROM A.A;
>
> So I thought maybe if the cache name is different from the object type then
> it might work,
> so instead of "Cache/Schema.ObjectClass" like A.A or B.B0 I use Cache_A.A
> and Cache_B.B0, but that is failing.
> The only thing I changed in the code is
> private static String PREFIXTABLE =
> "Cache_";//IgniteSQL.class.getSimpleName() + "Cache_";
>
> ++--
> --++-+
> |   TABLE_CAT|  TABLE_SCHEM   |
> TABLE_NAME   |   TABLE_TYPE|
> ++--
> --++-+
> || Cache_B| B0
> | TABLE   |
> || Cache_A| A
> | TABLE   |
> ++--
> --++-+
> 0: jdbc:ignite:thin://127.0.0.1/> SELECT * FROM Cache_A.A;
> Error: Failed to parse query. Schema "CACHE_A" not found; SQL statement:
> SELECT * FROM Cache_A.A [90079-195] (state=42000,code=0)
> java.sql.SQLException: Failed to parse query. Schema "CACHE_A" not found;
> SQL statement:
> SELECT * FROM Cache_A.A [90079-195]
> at
> org.apache.ignite.internal.jdbc.thin.JdbcThinConnection.sendRequest(
> JdbcThinConnection.java:648)
> at
> org.apache.ignite.internal.jdbc.thin.JdbcThinStatement.
> execute0(JdbcThinStatement.java:130)
> at
> org.apache.ignite.internal.jdbc.thin.JdbcThinStatement.
> execute(JdbcThinStatement.java:299)
> at sqlline.Commands.execute(Commands.java:823)
> at sqlline.Commands.sql(Commands.java:733)
> at sqlline.SqlLine.dispatch(SqlLine.java:795)
> at sqlline.SqlLine.begin(SqlLine.java:668)
> at sqlline.SqlLine.start(SqlLine.java:373)
> at sqlline.SqlLine.main(SqlLine.java:265)
> 0: jdbc:ignite:thin://127.0.0.1/> SELECT * FROM Cache_B.B0;
> Error: Failed to parse query. Schema "CACHE_B" not found; SQL statement:
> SELECT * FROM Cache_B.B0 [90079-195] (state=42000,code=0)
> java.sql.SQLException: Failed to parse query. Schema "CACHE_B" not found;
> SQL statement:
> SELECT * FROM Cache_B.B0 [90079-195]
> at
> org.apache.ignite.internal.jdbc.thin.JdbcThinConnection.sendRequest(
> JdbcThinConnection.java:648)
> at
> org.apache.ignite.internal.jdbc.thin.JdbcThinStatement.
> execute0(JdbcThinStatement.java:130)
> at
> org.apache.ignite.internal.jdbc.thin.JdbcThinStatement.
> execute(JdbcThinStatement.java:299)
> at sqlline.Commands.execute(Commands.java:823)
> at sqlline.Commands.sql(Commands.java:733)
> at sqlline.SqlLine.dispatch(SqlLine.java:795)
> at sqlline.SqlLine.begin(SqlLine.java:668)
> at sqlline.SqlLine.start(SqlLine.java:373)
> at sqlline.SqlLine.main(SqlLine.java:265)
> 0: jdbc:ignite:thin://127.0.0.1/>
>
>
> Thanks
> w
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Error: Failed to parse query: SELECT * FROM ...

2018-03-21 Thread Alex Plehanov
Hello, Wilhelm!

The default SQL schema is PUBLIC: when you create tables via SQL, or query
tables without specifying a schema, the default schema is used. When you
create tables via Java, the cache name is used as the schema name. Your
sqlline output shows that the two tables A and B0 were created in schemas A
and B respectively.

Try to use this SQL to query your data:
SELECT * FROM B.B0;
SELECT * FROM A.A;
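
If you'd rather have the Java-created tables in the default schema so that
an unqualified "SELECT * FROM A" works, CacheConfiguration lets you override
the schema name. A minimal sketch, assuming a version where setSqlSchema is
available (A here is the class from your example):

import org.apache.ignite.configuration.CacheConfiguration;

// Puts the cache's table into the PUBLIC schema instead of a schema
// named after the cache.
CacheConfiguration<Long, A> aCfg = new CacheConfiguration<>("A");
aCfg.setSqlSchema("PUBLIC");
aCfg.setIndexedTypes(Long.class, A.class);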


2018-03-21 2:51 GMT+03:00 Wilhelm :

> Hello,
>
> I'm playing with Ignite to evaluate it; basically I want to test how class
> inheritance/polymorphism works with the cache (and is SQL-searchable).
> To test this well I'm trying to look at the results it produces in DBeaver
> and/or sqlline.sh
>
> Note: I'm using version 2.3.0 (same result on version 2.4.0)
>
> If I create tables and add data from DBeaver everything is fine, I can see
> the tables and data.
> If I create the tables and data from Java code then I can see the tables
> from DBeaver or sqlline, but if I add some data I can see it from Java and
> not from DBeaver or sqlline. Any idea?
>
> Here is an example:
> Note: I put a breakpoint to the println so I can test before it clears the
> cache.
>
> public interface IA {
> double get_fa_1();
> void set(double d);
> }
>
> public class A implements IA {
> @QuerySqlField(index = true)
> public double fa_1 =3;
> @QuerySqlField
> public B0 _b = null;
> @QuerySqlField
> public String name = "A";
>
> public A(double d){ fa_1 = d;}
> public double get_fa_1(){return fa_1; }
> public void set(double d){fa_1 = d;}
>
> }
>
> public class B0{
> @QuerySqlField
> public double fb0_1 =5;
> @QuerySqlField
> public double fb0_2 =10;
> @QuerySqlField
> public String name = "B0";
>
> public B0(double b1, double b2){ fb0_1=b1; fb0_2=b2;}
> }
>
> public class B10 extends B0 {
> @QuerySqlField
> public double fb10_1 = 6;
> @QuerySqlField
> public String name = "B10";
>
> public B10(double b1, double b2) {super(b1, b2);}
> }
>
> public class IgniteATest {
>
> private static final Logger LOG =
> LoggerFactory.getLogger(IgniteAna.class);
> private static String PREFIXTABLE = "";//IgniteSQL.class.
> getSimpleName()
> + "Cache_";
>
> private static Ignite ignite;
>
> public static void main(String[] args) {
> try {
>
> //Ignition.setClientMode(false);
> ignite = Ignition.start("ignite-docker.xml");
> //ignite.active(true);
>
>
> CacheConfiguration<Long, A> a_cacheCfg = new
> CacheConfiguration<>(PREFIXTABLE+"A");
>
> a_cacheCfg.setCacheMode(CacheMode.PARTITIONED); // Default.
> a_cacheCfg.setIndexedTypes(Long.class, A.class);
>
>
> CacheConfiguration<AffinityKey<Long>, B0> b_cacheCfg = new
> CacheConfiguration<>(PREFIXTABLE+"B");
>
> b_cacheCfg.setCacheMode(CacheMode.PARTITIONED); // Default.
> b_cacheCfg.setIndexedTypes(AffinityKey.class, B0.class);
>
> IgniteCache<Long, A> cache_a =
> ignite.getOrCreateCache(a_cacheCfg);
> IgniteCache<AffinityKey<Long>, B0> cache_b =
> ignite.getOrCreateCache(b_cacheCfg);
>
>
> A a = new A(0);
> a._b = new B0(0.2, 0.3);
> cache_a.put(1L, a);
> cache_b.put(new AffinityKey(1L), a._b);
>
> a = new A(1);
> a._b = new B10(1.2, 1.3);
> cache_a.put(2L, a);
> cache_b.put(new AffinityKey(2L), a._b);
>
>  /*   a = new A(2);
> a._b = new B11(2.2, 2.3);
> cache_a.put(3L, a);
> cache_b.put(new AffinityKey(3L), a._b);
>
> a = new A(3);
> a._b = new B2(3.2, 3.3);
> cache_a.put(4L, a);
> cache_b.put(new AffinityKey(4L), a._b);
> */
> A a0 = cache_a.get(1L);
>
> System.out.println(a0);
>
> } catch (Exception ex) {
> LOG.error("Got exception: {}", ex.getMessage(), ex);
> } finally {
> ignite.destroyCache(PREFIXTABLE+"A");
> ignite.destroyCache(PREFIXTABLE+"B");
>
> LOG.info("Caches dropped...");
>
> ignite.close();
> }
> }
>
> Here is what I get from sqlline.sh
> sqlline version 1.3.0
> 0: jdbc:ignite:thin://127.0.0.1/> !table
> ++--
> --++-+
> |   TABLE_CAT|  TABLE_SCHEM   |
> TABLE_NAME   |   TABLE_TYPE|
> ++--
> --++-+
> || A  | A
> | TABLE   |
> || B  | B0
> | TABLE   |
>