Re: [External]Re: unable to start node

2020-09-03 Thread Denis Mekhanikov
Could you provide the schema of the table with all the indexed fields?
Is it possible that you inserted a String into this cache by any chance,
instead of a POJO with the fields that the table declares?

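A hypothetical sketch (names are illustrative, not from the original thread) of
the failure mode being asked about: with Object values, a plain String can be
put into a cache whose query entity expects a POJO, and indexing then fails:

    import org.apache.ignite.Ignite;
    import org.apache.ignite.IgniteCache;

    public class MixedTypesSketch {
        // Stand-in for the real value class behind CustomerCharsCache.
        static class CustomerChars {
            String name;
            CustomerChars(String name) { this.name = name; }
        }

        static void demo(Ignite ignite) {
            // With Object values the compiler cannot catch a mismatch between
            // the configured query entity type and the actual value.
            IgniteCache<Long, Object> cache = ignite.cache("CustomerCharsCache");
            cache.put(1L, new CustomerChars("ok")); // matches the query entity
            cache.put(2L, "oops");                  // a String slips in unnoticed
        }
    }
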
Denis

чт, 3 сент. 2020 г. в 14:19, Kamlesh Joshi :

> Denis,
>
>
>
> Yes, we do have a table connected to that cache (we don’t update anything
> through SQL, though), but it’s not created via the CREATE TABLE command. We
> use XMLs to create caches.
>
>
>
>
>
> *Thanks and Regards,*
>
> *Kamlesh Joshi*
>
>
>
> *From:* Denis Mekhanikov 
> *Sent:* 03 September 2020 16:35
> *To:* user 
> *Subject:* Re: [External]Re: unable to start node
>
>
>
>
> Kamlesh,
>
>
>
> So do you have a SQL table connected to this cache? Is it a query entity, or
> a table created through the CREATE TABLE command?
>
>
>
> Denis
>
>
>
> чт, 3 сент. 2020 г. в 12:09, Kamlesh Joshi :
>
> Hi Denis,
>
>
>
> We cleared all the indices of that particular node and tried starting it, but
> no luck. Shall we do this activity for the entire cluster?
>
>
>
> Regarding CustomerCharsCache: we have multiple thick clients which write to
> this cache. Similarly, we have multiple caches with the same setup but never
> observed this issue earlier (in version 2.6.0). We migrated to Ignite 2.7.6
> around 3 months back.
>
>
>
> Please let me know if there is anything we should try to recover the node.
>
>
>
> *Thanks and Regards,*
>
> *Kamlesh Joshi*
>
>
>
> *From:* Denis Mekhanikov 
> *Sent:* 03 September 2020 14:08
> *To:* user 
> *Subject:* Re: [External]Re: unable to start node
>
>
>
>
> It seems that there is a String value in the CustomerCharsCache cache that
> is being rebalanced to the new node, where an attempt to index it fails.
>
> Kamlesh, do you have an SQL table that stores data in
> the CustomerCharsCache? Did you put any data into this cache using the cache
> API directly?
>
>
>
> Denis
>
>
>
> чт, 3 сент. 2020 г. в 02:17, Denis Magda :
>
> @Taras Ledkov , @Igor Seliverstov
> , @Yuriy Gerzhedovich
> , did you guys see this issue before? It
> looks like a bug in the SQL engine.
>
>
>
> Kamlesh, in the meantime, try to follow the procedure described in this
> JIRA ticket (you need to clean the index, which will be rebuilt on
> startup):
>
> https://issues.apache.org/jira/browse/IGNITE-11252
>
>
>
> Let us know if it worked out for you. We'll document this procedure.
>
>
> -
>
> Denis
>
>
>
>
>
> On Wed, Sep 2, 2020 at 10:07 AM Kamlesh Joshi 
> wrote:
>
>
>
> Hi Tom,
>
>
>
> We are using Java 1.8.151 and Ignite 2.7.6.
>
>
>
> Regards,
>
> Kamlesh
>
>
>
>
> --
>
> *From:* Tom Black 
> *Sent:* Wednesday, 2 September, 2020, 1:31 pm
> *To:* user@ignite.apache.org
> *Subject:* [External]Re: unable to start node
>
>
>
>
> Kamlesh Joshi wrote:
> > */[2020-08-28T19:05:03,296][ERROR][sys-#940187%EDIFReplica%][] JVM will
> > be halted immediately due to the failure: [failureCtx=FailureContext
> > [type=CRITICAL_ERROR, err=org.h2.message.DbException: General error:
> > "class o.a.i.IgniteCheckedException: Unexpected binary object class
> > [type=class java.lang.String]" [5-197]]]/*
>
> What Ignite and Java version are you using?
>
> regards.


Re: NullPointerException when destroying cache

2020-09-03 Thread Denis Mekhanikov
Thanks for the report!
I created a JIRA issue for that:
https://issues.apache.org/jira/browse/IGNITE-13398
This bug doesn't seem to have any impact in your case.

The following case is affected though:
If you have multiple services, one of which has
ServiceConfiguration#cacheName specified:
https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/services/ServiceConfiguration.html#setCacheName-java.lang.String-
Normally, if you destroy a cache that some service depends on, it leads to the
undeployment of this service.
Because of this bug, processing of the cache destruction may be interrupted,
so the service won't be undeployed.

Otherwise this bug doesn't have any negative effects.

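A minimal sketch (names are illustrative, not from the original report) of the
affected setup — a service bound to a cache via ServiceConfiguration#setCacheName,
which should be undeployed when that cache is destroyed:

    import org.apache.ignite.Ignite;
    import org.apache.ignite.services.Service;
    import org.apache.ignite.services.ServiceConfiguration;

    public class BoundServiceSketch {
        static void deploy(Ignite ignite, Service svc) {
            ServiceConfiguration cfg = new ServiceConfiguration()
                .setName("mySvc")
                .setService(svc)
                .setTotalCount(1)
                .setCacheName("foo"); // destroying "foo" should undeploy the service
            ignite.services().deploy(cfg);
        }
    }
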
Denis

вт, 1 сент. 2020 г. в 11:52, Benjamin :

> Hello,
>
> If I destroy a cache when a node singleton service is deployed, a
> NullPointerException is logged.
> The cache destroying process still terminates successfully though.
> So it does not seem harmful, but it pollutes the logs, and I wonder if it's
> really harmless.
>
> This happens in 2.8.0 and 2.8.1, but not in 2.7.5.
>
>
> You can find attached a simple example reproducing this (Main.java).
>
> The error message is the following:
> Sep 01, 2020 10:46:35 AM org.apache.ignite.logger.java.JavaLogger error
> SEVERE: Failed to notify direct custom event listener:
> DynamicCacheChangeBatch
> [id=33788d84471-920edd0a-25b5-4d6d-9684-33f0524e9e0d, reqs=ArrayList
> [DynamicCacheChangeRequest [cacheName=foo, hasCfg=false,
> nodeId=d9234a21-9fa5-48b3-904b-d84a9cee9d25, clientStartOnly=false,
> stop=true, destroy=false, disabledAfterStart=false]],
> exchangeActions=ExchangeActions [startCaches=null, stopCaches=[foo],
> startGrps=[], stopGrps=[foo, destroy=true], resetParts=null,
> stateChangeRequest=null], startCaches=false]
> java.lang.NullPointerException
> at
>
> org.apache.ignite.internal.processors.service.IgniteServiceProcessor.lambda$processDynamicCacheChangeRequest$6(IgniteServiceProcessor.java:1694)
> at java.util.Collection.removeIf(Collection.java:414)
> at
>
> org.apache.ignite.internal.processors.service.IgniteServiceProcessor.processDynamicCacheChangeRequest(IgniteServiceProcessor.java:1691)
> at
>
> org.apache.ignite.internal.processors.service.IgniteServiceProcessor.access$200(IgniteServiceProcessor.java:108)
> at
>
> org.apache.ignite.internal.processors.service.IgniteServiceProcessor$3.onCustomEvent(IgniteServiceProcessor.java:232)
> at
>
> org.apache.ignite.internal.processors.service.IgniteServiceProcessor$3.onCustomEvent(IgniteServiceProcessor.java:229)
> at
>
> org.apache.ignite.internal.managers.discovery.GridDiscoveryManager$4.onDiscovery0(GridDiscoveryManager.java:665)
> at
>
> org.apache.ignite.internal.managers.discovery.GridDiscoveryManager$4.lambda$onDiscovery$0(GridDiscoveryManager.java:528)
> at
>
> org.apache.ignite.internal.managers.discovery.GridDiscoveryManager$DiscoveryMessageNotifierWorker.body0(GridDiscoveryManager.java:2625)
> at
>
> org.apache.ignite.internal.managers.discovery.GridDiscoveryManager$DiscoveryMessageNotifierWorker.body(GridDiscoveryManager.java:2663)
> at
> org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:120)
> at java.lang.Thread.run(Thread.java:748)
>
>
>
>
>


Re: How to persist data only on selected nodes but not all nodes in cluster

2020-09-03 Thread Denis Mekhanikov
When you have persistence configured in your cluster, a set of nodes forms the
baseline topology (BLT). Those are the nodes that store the data and
persist it on their disks.
Nodes outside of the BLT can query the data stored on the nodes that are
in the BLT.
Normally, nodes are not added to the baseline automatically when they join
the cluster. It requires a manual action or the configuration of baseline
auto-adjustment:
https://www.gridgain.com/docs/latest/developers-guide/baseline-topology#baseline-topology-autoadjustment

You can add the nodes that you want to store and persist the data to the
baseline. Others can work as "compute-only" nodes.
If you want to optimize the work of nodes that are outside of the BLT,
consider using a near cache:
https://www.gridgain.com/docs/latest/developers-guide/near-cache

More information about the Baseline Topology feature:
https://www.gridgain.com/docs/latest/developers-guide/baseline-topology

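A minimal sketch (assuming Ignite 2.8 or later; the configuration path is
illustrative) of enabling baseline auto-adjustment from code, so that joining
server nodes are added to the BLT automatically after a quiet period:

    import org.apache.ignite.Ignite;
    import org.apache.ignite.Ignition;

    public class BaselineAutoAdjustSketch {
        public static void main(String[] args) {
            try (Ignite ignite = Ignition.start("config.xml")) {
                ignite.cluster().active(true);                      // activate the cluster
                ignite.cluster().baselineAutoAdjustEnabled(true);   // enable auto-adjustment
                ignite.cluster().baselineAutoAdjustTimeout(60_000); // adjust 60 s after a change
            }
        }
    }
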
Denis

ср, 2 сент. 2020 г. в 05:45, xingjl6280 :

> Hi team,
>
> My cluster is running in Replicated cache mode, to ensure no data loss if
> any node is down.
>
> Now I'm going to enable persistence, but I don't want each node to
> hold a full backup.
> Is it possible to make some of the nodes persist, while the rest run in
> pure memory mode to read and write the same set of data?
>
> I'm quite confused with Baseline topology.
> I saw this in documentation:
>
> 
> *Moreover, the cluster can have cluster nodes that are not a part of the
> baseline topology such as:
> Server nodes that either store data in memory or persist it to a 3rd party
> database like RDBMS or NoSQL.*
>
> 
>
>
> But I also found this statement, which seems to conflict with the above:
>
> 
> *The new node cannot hold data of caches/tables who persist data in Ignite
> persistence.*
>
> 
>
> Please kindly advise.
> Thank you.
>
> Regards
>
>
>
>


Re: Issue in Alter Table - Drop Column functionality

2020-09-03 Thread Denis Mekhanikov
I mean, DROP COLUMN, not DROP TABLE.

Denis

чт, 3 сент. 2020 г. в 11:46, Denis Mekhanikov :

> Does the DROP TABLE statement work if you specify the schema name
> explicitly?
> For example, if the table is in the PUBLIC schema, try running the
> following query: ALTER TABLE PUBLIC.person DROP COLUMN (age)
>
> Denis
>
> ср, 2 сент. 2020 г. в 20:27, Shravya Nethula <
> shravya.neth...@aline-consulting.com>:
>
>> Hi,
>>
>> When I am trying the following query from GridGain, it works as expected.
>> *ALTER TABLE person DROP COLUMN (age)*
>>
>> But when I try to execute the same query as a thick client with the
>> following Java code, it throws *IgniteSQLException*.
>> *Java Code:*
>> String sql = "*ALTER TABLE person DROP COLUMN (age)*";
>> FieldsQueryCursor<List<?>> cursor = cache.query(new SqlFieldsQuery(sql));
>>
>> *Output:*
>> *javax.cache.CacheException*
>> at
>> org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.query(IgniteCacheProxyImpl.java:817)
>> at
>> org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.query(IgniteCacheProxyImpl.java:750)
>> at
>> org.apache.ignite.internal.processors.cache.GatewayProtectedCacheProxy.query(GatewayProtectedCacheProxy.java:424)
>> at
>> net.aline.cloudedh.base.database.IgniteTable._alterTable(IgniteTable.java:138)
>> at net.aline.cloudedh.base.database.BigTable.alter(BigTable.java:697)
>> at
>> net.aline.cloudedh.base.framework.DACEngine.alterTable(DACEngine.java:1015)
>> at
>> net.aline.cloudedh.base.framework.DACOperationsTest.main(DACOperationsTest.java:89)
>> *Caused by: class
>> org.apache.ignite.internal.processors.query.IgniteSQLException: null*
>> at
>> org.apache.ignite.internal.processors.query.h2.CommandProcessor.runCommandH2(CommandProcessor.java:888)
>> at
>> org.apache.ignite.internal.processors.query.h2.CommandProcessor.runCommand(CommandProcessor.java:418)
>> at
>> org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.executeCommand(IgniteH2Indexing.java:1048)
>> at
>> org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.querySqlFields(IgniteH2Indexing.java:1130)
>> at
>> org.apache.ignite.internal.processors.query.GridQueryProcessor$3.applyx(GridQueryProcessor.java:2406)
>> at
>> org.apache.ignite.internal.processors.query.GridQueryProcessor$3.applyx(GridQueryProcessor.java:2402)
>> at
>> org.apache.ignite.internal.util.lang.IgniteOutClosureX.apply(IgniteOutClosureX.java:36)
>> at
>> org.apache.ignite.internal.processors.query.GridQueryProcessor.executeQuery(GridQueryProcessor.java:2919)
>> at
>> org.apache.ignite.internal.processors.query.GridQueryProcessor.lambda$querySqlFields$1(GridQueryProcessor.java:2422)
>> at
>> org.apache.ignite.internal.processors.query.GridQueryProcessor.executeQuerySafe(GridQueryProcessor.java:2460)
>> at
>> org.apache.ignite.internal.processors.query.GridQueryProcessor.querySqlFields(GridQueryProcessor.java:2396)
>> at
>> org.apache.ignite.internal.processors.query.GridQueryProcessor.querySqlFields(GridQueryProcessor.java:2323)
>> at
>> org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.query(IgniteCacheProxyImpl.java:802)
>> ... 6 more
>> *Caused by: java.lang.NullPointerException*
>> *at
>> org.apache.ignite.internal.processors.query.h2.CommandProcessor.runCommandH2(CommandProcessor.java:834)*
>> ... 18 more
>>
>>
>> On the other hand, the ALTER TABLE ADD COLUMN functionality is working in
>> both GridGain and also through the thick client Java code.
>> Why is that? Are there any parameters or configurations missing?
>> Please kindly let me know if you need any more details regarding the
>> failure scenario.
>>
>>
>>
>>
>> Regards,
>>
>> Shravya Nethula,
>>
>> BigData Developer,
>>
>>
>> Hyderabad.
>>
>>


Re: Ignite Webconsole (console.gridgain.com) login not not working

2020-06-17 Thread Denis Mekhanikov
Tarun,

Thanks for the report!
We've had an issue with the service. It's back on now.

Denis

ср, 17 июн. 2020 г. в 16:17, tarunk :

> Hi All,
>
> I have been using the Ignite Web Console with my login hosted at
> https://console.gridgain.com/signin.
> I remember using it until about a month back; while logging in today, I am
> getting the error below.
>
> 404 Not Found
> --
> nginx
>
> Have there been any changes to this?
>
> Even trying to set up a new user gives the same error; tried with
> chrome/chrome.
>
> Thanks,
> Tarun
>
>
>
>


Re: Server Node comes down when a thick client node comes down and an update is issued within the failuredetectiontimeout

2020-03-26 Thread Denis Mekhanikov
Please find my reply in the following thread:
http://apache-ignite-developers.2346864.n4.nabble.com/Server-Node-comes-down-with-err-Failed-to-notify-listener-GridDhtTxPrepareFuture-Error-td46413.html

Denis





Re: Service grid webinar

2019-12-23 Thread Denis Mekhanikov
Prasad,

Sorry, I’ve only noticed your message just now.
You can find the recording of the webinar on YouTube: 
https://www.youtube.com/watch?v=MZzkkIHGMtI

Denis
On 7 Nov 2019, 21:05 +0300, Prasad Bhalerao , 
wrote:
> Can you please share the recording of the webinar, in case you have it?
>
> > On Wed 6 Nov, 2019, 3:30 AM Denis Mekhanikov wrote:
> > > Hi Igniters!
> > >
> > > I’ve been working on the Service Grid functionality in Apache Ignite for
> > > a while, and at some point I've decided to make a webinar with a
> > > high-level overview of this part of the project.
> > >
> > > If you want to learn more about services, look at some use cases or just
> > > ask a few questions of somebody who takes part in the development, please
> > > feel free to join the presentation on November 6th, at 10 AM PST.
> > >
> > > You can sign up by the following link:
> > > https://www.gridgain.com/resources/webinars/best-practices-microservices-architecture-apache-ignite
> > >
> > > Denis


Service grid webinar

2019-11-05 Thread Denis Mekhanikov
Hi Igniters!

I’ve been working on the Service Grid functionality in Apache Ignite for a 
while, and at some point I've decided to make a webinar with a high-level 
overview of this part of the project.

If you want to learn more about services, look at some use cases or just ask a
few questions of somebody who takes part in the development — please feel free
to join the presentation on November 6th, at 10 AM PST.

You can sign up by the following link: 
https://www.gridgain.com/resources/webinars/best-practices-microservices-architecture-apache-ignite

Denis


RE: Issue querying id column only

2019-10-21 Thread Denis Mekhanikov
Kurt,

I tried reproducing this issue using your data, but I didn’t manage to make the 
query for id return only one entry.

What version of Ignite do you use? How did you insert the data? SQL or cache 
API?

Denis
On 17 Oct 2019, 15:21 +0300, Kurt Semba , wrote:
> Hi Denis,
>
> The cache was defined in the spring configuration as follows and not 
> generated through SQL DML:
>     [Spring XML cache configuration; the markup was stripped by the mail
>     archive. Recoverable details: a CacheConfiguration bean, a cache store
>     factory built with javax.cache.configuration.FactoryBuilder$SingletonFactory
>     around com.extremenetworks.ignite.store.DomainNodeStore, and the indexed
>     type pair java.lang.String ->
>     com.extremenetworks.ignite.model.CachedDomainNode.]
>
> The annotated fields in CachedDomainNode:
>
>     @QuerySqlField(index = true)
>     private String id;
>
>     @QuerySqlField
>     private String name;
>
>     @QuerySqlField
>     private String ipAddress;
>
>     @QuerySqlField
>     private Long lastUpdate;
>
>     @QuerySqlField
>     private int nodeType;
>
>
> 0: jdbc:ignite:thin://127.0.0.1> select * from account.cacheddomainnode;
> +--------------------------------------+--------------+----------------+---------------+----------+
> | ID                                   | NAME         | IPADDRESS      | LASTUPDATE    | NODETYPE |
> +--------------------------------------+--------------+----------------+---------------+----------+
> | ---- | Test-XMC     | /127.0.0.1     | 1571185515950 | 0        |
> | 99488ecd-4cae-4ecc-8306-c3b4215452c2 | XMC-Justice1 | /10.51.102.191 | 1571278075994 | 0        |
> | 99488ecd-4cae-4ecc-8306-c3b4215452c3 | XMC-Justice2 | /10.51.102.192 | 1571278108222 | 0        |
> +--------------------------------------+--------------+----------------+---------------+----------+
> 3 rows selected (0.04 seconds)
> 0: jdbc:ignite:thin://127.0.0.1> select id from account.cacheddomainnode;
> +--------------------------------------+
> | ID                                   |
> +--------------------------------------+
> | ---- |
> +--------------------------------------+
> 1 row selected (0.013 seconds)
> 0: jdbc:ignite:thin://127.0.0.1> select _key from account.cacheddomainnode;
> +--------------------------------------+
> | _KEY                                 |
> +--------------------------------------+
> | ---- |
> | 99488ecd-4cae-4ecc-8306-c3b4215452c2 |
> | 99488ecd-4cae-4ecc-8306-c3b4215452c3 |
> +--------------------------------------+
>
>
> From: Denis Mekhanikov 
> Sent: Thursday, October 17, 2019 11:52 AM
> To: user@ignite.apache.org
> Subject: Re: Issue querying id column only
>
> Are you able to reproduce this issue using SQL only?
> Could you share the DDL, insert and select statements that lead to the 
> described issue?
>
> I tried the following queries, but they work as expected.
>
> CREATE TABLE people (id int PRIMARY key, first_name varchar, last_name 
> varchar);
>
> INSERT INTO people (id, first_name, last_name) VALUES (1, 'John', 'Doe');
> INSERT INTO people (id, first_name, last_name) VALUES (2, 'John', 'Foe');
>
> SELECT id FROM people;
>
> Denis
> On 17 Oct 2019, 09:02 +0300, Kurt Semba , wrote:
>
> > Hi all,
> >
> > Is it possible for a table, queried through the SQL interface, to return
> > only a subset of the data when selecting a specific column?
> >
> > e.g.
> >
> > We have a cache configuration defined based on Java SQL query annotations
> > that contains an id field and some other string fields. The value of the id
> > field in all entries also matches the value of the cache entry key.
> >
> > The table contains 3 entries; however, if I execute “select id from table”
> > through sqlline, I am only able to see 1 entry. However, if I execute
> > “select id, name from table”, I see all of them.
> >
> > Are there any steps I can take to better diagnose this?
> >
> > Thank you
> > Kurt
> >


Re: Node stopped.

2019-10-18 Thread Denis Mekhanikov
The following documentation page has some useful points on deployment in a 
virtualised environment: https://apacheignite.readme.io/docs/vmware-deployment

Denis
On 17 Oct 2019, 17:41 +0300, John Smith , wrote:
> Ok I have metribeat running on the VM hopefully I will see something...
>
> > On Thu, 17 Oct 2019 at 05:09, Denis Mekhanikov  
> > wrote:
> > > There are no long pauses in the GC logs, so it must be the whole VM pause.
> > >
> > > Denis
> > > On 16 Oct 2019, 23:07 +0300, John Smith , wrote:
> > > > Sorry here is the gc logs for all 3 machines: 
> > > > https://www.dropbox.com/s/chbbxigahd4v9di/gc-logs.zip?dl=0
> > > >
> > > > > On Wed, 16 Oct 2019 at 15:49, John Smith  
> > > > > wrote:
> > > > > > Hi, so it happened again here is my latest gc.log stats: 
> > > > > > https://gceasy.io/diamondgc-report.jsp?oTxnId_value=a215d573-d1cf-4d53-acf1-9001432bb28e
> > > > > >
> > > > > > Everything seems ok to me. I also have Elasticsearch Metricbeat 
> > > > > > running, the CPU usage looked normal at the time.
> > > > > >
> > > > > > > On Thu, 10 Oct 2019 at 13:05, Denis Mekhanikov 
> > > > > > >  wrote:
> > > > > > > > Unfortunately, I don’t.
> > > > > > > > You can ask the VM vendor or the cloud provider (if you use
> > > > > > > > any) for proper tooling or logs.
> > > > > > > > Make sure that there is no step in the VM’s lifecycle
> > > > > > > > that makes it freeze for a minute.
> > > > > > > > Also make sure that the physical CPU is not overutilized and no
> > > > > > > > VMs that run on it are starving.
> > > > > > > >
> > > > > > > > Denis
> > > > > > > > On 10 Oct 2019, 19:03 +0300, John Smith 
> > > > > > > > , wrote:
> > > > > > > > > Do you know of any good tools I can use to check the VM?
> > > > > > > > >
> > > > > > > > > > On Thu, 10 Oct 2019 at 11:38, Denis Mekhanikov 
> > > > > > > > > >  wrote:
> > > > > > > > > > > > Hi Dennis, so are you saying I should enable GC logs + 
> > > > > > > > > > > > the safe point logs as well?
> > > > > > > > > > >
> > > > > > > > > > > Having safepoint statistics in your GC logs may be 
> > > > > > > > > > > useful, so I recommend enabling them for troubleshooting 
> > > > > > > > > > > purposes.
> > > > > > > > > > > Check the lifecycle of your virtual machines. There is a 
> > > > > > > > > > > high chance that the whole machine is frozen, not just 
> > > > > > > > > > > the Ignite node.
> > > > > > > > > > >
> > > > > > > > > > > Denis
> > > > > > > > > > > On 10 Oct 2019, 18:25 +0300, John Smith 
> > > > > > > > > > > , wrote:
> > > > > > > > > > > > Hi Dennis, so are you saying I should enable GC logs + 
> > > > > > > > > > > > the safe point logs as well?
> > > > > > > > > > > >
> > > > > > > > > > > > > On Thu, 10 Oct 2019 at 11:22, John Smith 
> > > > > > > > > > > > >  wrote:
> > > > > > > > > > > > > > You are correct, it is running in a VM.
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > > On Thu, 10 Oct 2019 at 10:11, Denis Mekhanikov 
> > > > > > > > > > > > > > >  wrote:
> > > > > > > > > > > > > > > > Hi!
> > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > There are the following messages in the logs:
> > > > > > > > > > > > > > > >
> > > > > > > > > > > > > [22:26:21,816][WARNING][jvm-pause-detector-worker][IgniteKernal%xx]
> > > > > > > > > > > > > Possible too long JVM pause: 55705 milliseconds. […]

Re: Striim support for Ignite

2019-10-17 Thread Denis Mekhanikov
I’m not aware of any activity towards supporting Striim.
Try asking the same question in the Striim community. Hopefully, somebody
there will help you with that.

Denis
On 16 Oct 2019, 21:51 +0300, niamin , wrote:
> Is there any plan to add an Ignite connector for Striim to facilitate CDC?
>
>
>


Re: Unable to find the ml.inference package

2019-10-17 Thread Denis Mekhanikov
This package hasn’t been released yet.
It’s planned to be released in Apache Ignite 2.8. Here is the ticket under
which this package was created:
https://issues.apache.org/jira/browse/IGNITE-10234

Until then, you can try using a nightly build:
https://ignite.apache.org/download.cgi#nightly-builds
But note that nightly builds are much less stable than regular releases, so 
it’s not recommended to use them in production. They are for development only.

Denis
On 16 Oct 2019, 23:06 +0300, captcha , wrote:
> Hi here,
>
> I'm trying to use the ML inference API to do ML model predictions in Ignite.
> I included the org.apache.ignite:ignite-ml dependency in my project. I'm
> able to see most packages and methods in the dependency. However, it seems the
> inference APIs
> (https://github.com/apache/ignite/tree/master/modules/ml/src/main/java/org/apache/ignite/ml/inference)
> are not there. The version I use is 2.7.6.
>
>
>
>
>
>


Re: Ignite Node Connection Issue

2019-10-17 Thread Denis Mekhanikov
I answered you on Stack Overflow:
https://stackoverflow.com/questions/58422096/failing-to-connect-apache-ignite-cluster
You need to use port 10800 for thin clients instead of 47xxx.

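A minimal sketch (the address is assumed from the hosts mentioned below) of a
Java thin client connection, which goes to the client connector port (10800 by
default) rather than the discovery (47500..) or communication (47100..) ports:

    import org.apache.ignite.Ignition;
    import org.apache.ignite.client.ClientCache;
    import org.apache.ignite.client.IgniteClient;
    import org.apache.ignite.configuration.ClientConfiguration;

    public class ThinClientSketch {
        public static void main(String[] args) throws Exception {
            ClientConfiguration cfg = new ClientConfiguration()
                .setAddresses("ec2-100-25-173-220.compute-1.amazonaws.com:10800");
            try (IgniteClient client = Ignition.startClient(cfg)) {
                ClientCache<Integer, String> cache = client.getOrCreateCache("test");
                cache.put(1, "value"); // simple sanity-check round-trip
            }
        }
    }
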
Denis
On 17 Oct 2019, 00:08 +0300, sri hari kali charan Tummala 
, wrote:
> I even tried with simple Scala code, still no luck!
>
> Error:-
> Failed to connect to any address from IP finder (will retry to join topology 
> every 2000 ms; change 'reconnectDelay' to configure the frequency of retries):
>
> package com.ignite.examples.igniteStartup
>
> import java.util.Arrays
> import java.util.List
>
> import org.apache.ignite.Ignite
> import org.apache.ignite.Ignition
> import org.apache.ignite.configuration.IgniteConfiguration
> import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi
> import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder
>
> object IgniteClientConnection {
>   def main(args: Array[String]): Unit = {
>     val spi = new TcpDiscoverySpi
>     val ipFinder = new TcpDiscoveryVmIpFinder
>     val hostList: List[String] =
>       Arrays.asList(("ec2-100-25-173-220:47500..47509," +
>         "ec2-100-25-173-220.compute-1.amazonaws.com:47500..47509," +
>         "3.86.250.240:47500..47509," +
>         "172.31.81.211:47500..47509," +
>         "100.25.173.220:47500..47509").split(","): _*)
>
>     ipFinder.setAddresses(hostList)
>     spi.setIpFinder(ipFinder)
>
>     val cfg = new IgniteConfiguration
>     cfg.setDiscoverySpi(spi)
>     cfg.setClientMode(true)
>     cfg.setPeerClassLoadingEnabled(true)
>
>     val ignite: Ignite = Ignition.start(cfg)
>     Ignition.ignite().cache("test")
>     // LOG.info(">>> cache acquired")
>   }
> }
>
> > On Wed, Oct 16, 2019 at 3:41 PM sri hari kali charan Tummala 
> >  wrote:
> > > Hi All,
> > >
> > > I was able to create a working Ignite cluster
> > > (https://www.gridgain.com/docs/8.7.6//installation-guide/manual-install-on-ec2)
> > > on AWS. I opened all the ports in my security group, but when I try to
> > > connect to the Ignite cluster from my local PC Scala code, Ignite fails
> > > with the error below.
> > >
> > > I am using the example-default.xml file:
> > >
> > > <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
> > >     <property name="addresses">
> > >         <list>
> > >             <value>3.86.250.240:47100..47200</value>
> > >             <value>3.84.154.193:47100..47200</value>
> > >             <value>127.0.0.1:47100..47200</value>
> > >         </list>
> > >     </property>
> > > </bean>
> > >
> > > Failed to connect to any address from IP finder (make sure IP finder 
> > > addresses are correct and firewalls are disabled on all host machines)
> > >
> > > Error on Ignite Ec2 instance:-
> > > class org.apache.ignite.internal.util.nio.GridNioException: Invalid 
> > > message type: -4692
> > >   at 
> > > org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.processSelectedKeysOptimized(GridNioServer.java:2437)
> > >   at 
> > > org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.bodyInternal(GridNioServer.java:2178)
> > >   at 
> > > org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.body(GridNioServer.java:1819)
> > >   at 
> > > org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:119)
> > >   at java.lang.Thread.run(Thread.java:748)
> > > Caused by: class org.apache.ignite.IgniteException: Invalid message type: 
> > > -4692
> > >   at 
> > > org.apache.ignite.internal.managers.communication.GridIoMessageFactory.create(GridIoMessageFactory.java:1151)
> > >   at 
> > > org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi$6.create(TcpCommunicationSpi.java:2336)
> > >   at 
> > > org.apache.ignite.internal.util.nio.GridDirectParser.decode(GridDirectParser.java:80)
> > >   at 
> > > org.apache.ignite.internal.util.nio.GridNioCodecFilter.onMessageReceived(GridNioCodecFilter.java:113)
> > >   at 
> > > org.apache.ignite.internal.util.nio.GridNioFilterAdapter.proceedMessageReceived(GridNioFilterAdapter.java:108)
> > >   at 
> > > org.apache.ignite.internal.util.nio.GridConnectionBytesVerifyFilter.onMessageReceived(GridConnectionBytesVerifyFilter.java:132)
> > >   at 
> > > org.apache.ignite.internal.util.nio.GridNioFilterAdapter.proceedMessageReceived(GridNioFilterAdapter.java:108)
> > >   at 
> > > org.apache.ignite.internal.util.nio.GridNioServer$HeadFilter.onMessageReceived(GridNioServer.java:3575)
> > >   at 
> > > org.apache.ignite.internal.util.nio.GridNioFilterChain.onMessageReceived(GridNioFilterChain.java:174)
> > >   at 
> > > 


Re: Cache Preload from database using IgniteDatastreamer

2019-10-15 Thread Denis Mekhanikov
It’s worth mentioning that the best performance is usually achieved when
adding data to the same IgniteDataStreamer instance from multiple threads.

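A minimal sketch (the cache name and entry counts are illustrative) of several
threads feeding the same streamer; IgniteDataStreamer#addData is safe to call
concurrently:

    import java.util.stream.IntStream;
    import org.apache.ignite.Ignite;
    import org.apache.ignite.IgniteDataStreamer;

    public class StreamerSketch {
        static void load(Ignite ignite) {
            try (IgniteDataStreamer<Integer, String> streamer = ignite.dataStreamer("myCache")) {
                // Eight parallel loaders share the same streamer instance.
                IntStream.range(0, 8).parallel().forEach(t -> {
                    for (int i = t * 100_000; i < (t + 1) * 100_000; i++)
                        streamer.addData(i, "value-" + i);
                });
            } // close() flushes any remaining buffered entries
        }
    }
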
Denis
On 12 Oct 2019, 06:46 +0300, ravichandra , 
wrote:
> Hi Om Thacker,
>
> I am trying to implement / follow the same approach for my POC, to understand
> how it can be applied to real-time scenarios.
> I googled but couldn't find a complete implementation in Spring Boot.
> Is it possible to share the source code location? It would help me a lot.
>
> Thanks,
> Ravichandra
>
>
>


RE: [External]Re: Freeing up RAM/cache

2019-10-14 Thread Denis Mekhanikov
An expiry policy is a part of the cache configuration that cannot be changed at
runtime. So, in order to add an expiry policy to an existing cache, you need to
drop the cache and create it anew with the changed configuration.
But an expiry policy doesn't seem to be useful in your situation anyway: it
removes data from both persistence and memory.
If you don't want to lose expired entries, you'll need to tune the size of the
data region and rely on page replacement, or use 3rd-party persistence with
read-through mode enabled and an expiry policy configured.

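A minimal sketch (the cache name and TTL are illustrative) showing where an
expiry policy goes — into the CacheConfiguration before the cache is created,
since it cannot be added to an existing cache:

    import java.util.concurrent.TimeUnit;
    import javax.cache.expiry.CreatedExpiryPolicy;
    import javax.cache.expiry.Duration;
    import org.apache.ignite.Ignite;
    import org.apache.ignite.IgniteCache;
    import org.apache.ignite.configuration.CacheConfiguration;

    public class ExpirySketch {
        static IgniteCache<Integer, String> create(Ignite ignite) {
            CacheConfiguration<Integer, String> cfg = new CacheConfiguration<>("newCache");
            // Entries expire 30 minutes after creation.
            cfg.setExpiryPolicyFactory(CreatedExpiryPolicy.factoryOf(new Duration(TimeUnit.MINUTES, 30)));
            return ignite.getOrCreateCache(cfg);
        }
    }
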
Denis
On 14 Oct 2019, 13:30 +0300, Kamlesh Joshi , wrote:
> Thanks for the response, Denis! That answers one of my questions. Could you
> please advise on the question below regarding the expiration policy?
>
> Thanks and Regards,
> Kamlesh Joshi
>
> From: Denis Mekhanikov 
> Sent: Thursday, October 10, 2019 10:31 PM
> To: user@ignite.apache.org; user@ignite.apache.org
> Subject: [External]Re: Freeing up RAM/cache
>
>
>
> There is no manual way to evict data from memory.
> You can limit the size of your data region, so that if this limit is reached, 
> then some pages will be dropped from memory automatically and replaced with 
> new ones.
> This process is called page replacement. You can read about it here: 
> https://cwiki.apache.org/confluence/display/IGNITE/Ignite+Durable+Memory+-+under+the+hood#IgniteDurableMemory-underthehood-Pagereplacement%28rotationwithdisk%29
>
> Denis
> On 10 Oct 2019, 18:41 +0300, Kamlesh Joshi , wrote:
>
> > Hi Igniters,
> >
> > Is there any way to release main memory? I just want to remove a few
> > entries from RAM (and not from the persistence). If I need the data again,
> > I should be able to retrieve it from persistence.
> >
> > I have already gone through the expiration policy for this. But can we set
> > an expiration policy at runtime, or do we need to re-create the caches?
> >
> > Thanks and Regards,
> > Kamlesh Joshi
> >
> >


Re: Fault tolerance with IgniteCallable and IgniteRunnable on user errors

2019-10-11 Thread Denis Mekhanikov
Prasad,

User errors are propagated to the calling site, so you can implement failover 
outside of the task’s logic.

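A minimal sketch (not from the original reply; names are illustrative) of
caller-side retry around affinityCall, relying on the fact that user
exceptions propagate back to the caller:

    import org.apache.ignite.Ignite;
    import org.apache.ignite.IgniteException;
    import org.apache.ignite.lang.IgniteCallable;

    public class RetrySketch {
        static <T> T callWithRetries(Ignite ignite, String cacheName, Object key,
                                     IgniteCallable<T> task, int maxAttempts) {
            IgniteException last = null;
            for (int attempt = 1; attempt <= maxAttempts; attempt++) {
                try {
                    return ignite.compute().affinityCall(cacheName, key, task);
                } catch (IgniteException e) {
                    last = e; // inspect the cause and rethrow non-retriable errors here
                }
            }
            throw last;
        }
    }
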
Denis
On 11 Oct 2019, 17:33 +0300, Prasad Bhalerao , 
wrote:
> Hi,
>
> How do I define custom failover behavior when executing IgniteRunnable or
> IgniteCallable tasks with ignite.compute().affinityRun or
> ignite.compute().affinityCall?
>
> I want to retry the task on a few user errors.
>
> Implementing the additional interface ComputeTask along with IgniteRunnable
> is not helping in this case.
>
> Thanks,
> Prasad


Re: Node stopped.

2019-10-10 Thread Denis Mekhanikov
> Hi Dennis, so are you saying I should enable GC logs + the safe point logs as 
> well?

Having safepoint statistics in your GC logs may be useful, so I recommend 
enabling them for troubleshooting purposes.
Check the lifecycle of your virtual machines. There is a high chance that the 
whole machine is frozen, not just the Ignite node.

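A minimal sketch of the JVM options that enable this on a HotSpot JDK 8 (the
log path and main class are illustrative):

    java -Xloggc:/var/log/ignite/gc.log -XX:+PrintGCDetails -XX:+PrintGCDateStamps \
         -XX:+PrintGCApplicationStoppedTime \
         -XX:+PrintSafepointStatistics -XX:PrintSafepointStatisticsCount=1 \
         -cp ignite.jar MyIgniteApp
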
Denis
On 10 Oct 2019, 18:25 +0300, John Smith , wrote:
> Hi Dennis, so are you saying I should enable GC logs + the safe point logs as 
> well?
>
> > On Thu, 10 Oct 2019 at 11:22, John Smith  wrote:
> > > You are correct, it is running in a VM.
> > >
> > > > On Thu, 10 Oct 2019 at 10:11, Denis Mekhanikov  
> > > > wrote:
> > > > > Hi!
> > > > >
> > > > > There are the following messages in the logs:
> > > > >
> > > > > [22:26:21,816][WARNING][jvm-pause-detector-worker][IgniteKernal%xx]
> > > > >  Possible too long JVM pause: 55705 milliseconds.
> > > > > ...
> > > > > [22:26:21,847][SEVERE][ttl-cleanup-worker-#48%xx%][G] Blocked 
> > > > > system-critical thread has been detected. This can lead to 
> > > > > cluster-wide undefined behaviour [threadName=partition-exchanger, 
> > > > > blockedFor=57s]
> > > > >
> > > > > Looks like the JVM was paused for almost a minute. It doesn’t seem to 
> > > > > be caused by a garbage collection, since there is no evidence of GC 
> > > > > pressure in the GC log. Usually such big pauses happen in virtualised 
> > > > > environments when backups are captured from machines or they just 
> > > > > don’t have enough CPU time.
> > > > >
> > > > > Looking at safepoint statistics may also reveal some interesting 
> > > > > details. You can learn about safepoints here: 
> > > > > https://blog.gceasy.io/2016/12/22/total-time-for-which-application-threads-were-stopped/
> > > > >
> > > > > Denis
> > > > > On 9 Oct 2019, 23:14 +0300, John Smith , 
> > > > > wrote:
> > > > > > So the error says to set clientFailureDetectionTimeout=3
> > > > > >
> > > > > > 1- Do I put a higher value than 3?
> > > > > > 2- Do I do it on the client or the server nodes or all nodes?
> > > > > > 3- Also, if a client is misbehaving, why shut off the server node?
> > > > > >
> > > > > > > On Thu, 3 Oct 2019 at 21:02, John Smith  
> > > > > > > wrote:
> > > > > > > > But if it's the client node that's failing, why is the server
> > > > > > > > node stopping? I'm pretty sure we do very simple put and get
> > > > > > > > operations. All the client nodes are started as client=true
> > > > > > > >
> > > > > > > > > On Thu., Oct. 3, 2019, 4:18 p.m. Denis Magda, 
> > > > > > > > >  wrote:
> > > > > > > > > > Hi John,
> > > > > > > > > >
> > > > > > > > > > I don't see any GC pressure or STW pauses either. If not GC 
> > > > > > > > > > then it might have been caused by a network glitch or some 
> > > > > > > > > > long-running operation started by the app. These logs 
> > > > > > > > > > statement
> > > > > > > > > >
> > > > > > > > > >
> > > > > > > > > > [22:26:21,827][WARNING][tcp-disco-client-message-worker-#10%xx%][TcpDiscoverySpi]
> > > > > > > > > >  Client node considered as unreachable and will be dropped 
> > > > > > > > > > from cluster, because no metrics update messages received 
> > > > > > > > > > in interval: 
> > > > > > > > > > TcpDiscoverySpi.clientFailureDetectionTimeout() ms. It may 
> > > > > > > > > > be caused by network problems or long GC pause on client 
> > > > > > > > > > node, try to increase this parameter. 
> > > > > > > > > > [nodeId=b07182d0-bf70-4318-9fe3-d7d5228bd6ef, 
> > > > > > > > > > clientFailureDetectionTimeout=3]
> > > > > > > > > >
> > > > > > > > > > [22:26:21,839][WARNING][tcp-disco-client-message-worker-#12%xx%][TcpDi

Re: Apache Ignite Change data capture functionality

2019-10-10 Thread Denis Mekhanikov
Ravichandra,

There is no integration for Striim in the Apache Ignite codebase, so you need to
check the Striim documentation to see whether it can be configured for Ignite.
If you want to use Ignite as a target, then a JDBC adapter should work, if one
is available.
If Ignite should work as a source, then you'll probably need to implement the
adapter yourself using continuous queries or cache events, as sketched below.
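A minimal sketch of the source-side approach (the cache name and the sink call
are hypothetical; the continuous query API itself is standard Ignite):

    import javax.cache.event.CacheEntryEvent;
    import org.apache.ignite.Ignite;
    import org.apache.ignite.IgniteCache;
    import org.apache.ignite.cache.query.ContinuousQuery;

    Ignite ignite = ...; // an already started node or client

    IgniteCache<Long, String> cache = ignite.getOrCreateCache("events");

    ContinuousQuery<Long, String> qry = new ContinuousQuery<>();

    // The local listener is called for every create/update on the cache;
    // forwardToSink() stands in for whatever pushes changes to Striim.
    qry.setLocalListener(events -> {
        for (CacheEntryEvent<? extends Long, ? extends String> e : events)
            forwardToSink(e.getKey(), e.getValue());
    });

    cache.query(qry); // keep the returned cursor open for as long as CDC should run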

Denis
On 9 Oct 2019, 21:13 +0300, ravichandra , 
wrote:
> As part of change data capture functionality, can Apache Ignite be integrated
> with Striim, which is real-time data integration software? I read that the
> same functionality can be achieved by integrating with Oracle GoldenGate.
> But I am curious to know whether CDC is available via Striim.
>
> Thanks,
> Ravichandra
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Node stopped.

2019-10-10 Thread Denis Mekhanikov
Hi!

There are the following messages in the logs:

[22:26:21,816][WARNING][jvm-pause-detector-worker][IgniteKernal%xx] 
Possible too long JVM pause: 55705 milliseconds.
...
[22:26:21,847][SEVERE][ttl-cleanup-worker-#48%xx%][G] Blocked 
system-critical thread has been detected. This can lead to cluster-wide 
undefined behaviour [threadName=partition-exchanger, blockedFor=57s]

Looks like the JVM was paused for almost a minute. It doesn’t seem to be caused 
by a garbage collection, since there is no evidence of GC pressure in the GC 
log. Usually such big pauses happen in virtualised environments when backups 
are captured from machines or they just don’t have enough CPU time.

Looking at safepoint statistics may also reveal some interesting details. You 
can learn about safepoints here: 
https://blog.gceasy.io/2016/12/22/total-time-for-which-application-threads-were-stopped/

Denis
On 9 Oct 2019, 23:14 +0300, John Smith , wrote:
> So the error says to set clientFailureDetectionTimeout=3
>
> 1- Do I put a higher value than 3?
> 2- Do I do it on the client or the server nodes or all nodes?
> 3- Also, if a client is misbehaving, why shut off the server node?
>
> > On Thu, 3 Oct 2019 at 21:02, John Smith  wrote:
> > > But if it's the client node that's failing, why is the server node
> > > stopping? I'm pretty sure we do very simple put and get operations. All
> > > the client nodes are started as client=true
> > >
> > > > On Thu., Oct. 3, 2019, 4:18 p.m. Denis Magda,  wrote:
> > > > > Hi John,
> > > > >
> > > > > I don't see any GC pressure or STW pauses either. If not GC then it 
> > > > > might have been caused by a network glitch or some long-running 
> > > > > operation started by the app. These logs statement
> > > > >
> > > > >
> > > > > [22:26:21,827][WARNING][tcp-disco-client-message-worker-#10%xx%][TcpDiscoverySpi]
> > > > >  Client node considered as unreachable and will be dropped from 
> > > > > cluster, because no metrics update messages received in interval: 
> > > > > TcpDiscoverySpi.clientFailureDetectionTimeout() ms. It may be caused 
> > > > > by network problems or long GC pause on client node, try to increase 
> > > > > this parameter. [nodeId=b07182d0-bf70-4318-9fe3-d7d5228bd6ef, 
> > > > > clientFailureDetectionTimeout=3]
> > > > >
> > > > > [22:26:21,839][WARNING][tcp-disco-client-message-worker-#12%xx%][TcpDiscoverySpi]
> > > > >  Client node considered as unreachable and will be dropped from 
> > > > > cluster, because no metrics update messages received in interval: 
> > > > > TcpDiscoverySpi.clientFailureDetectionTimeout() ms. It may be caused 
> > > > > by network problems or long GC pause on client node, try to increase 
> > > > > this parameter. [nodeId=302cff60-b88d-40da-9e12-b955e6bf973d, 
> > > > > clientFailureDetectionTimeout=3]
> > > > >
> > > > > [22:26:21,847][SEVERE][ttl-cleanup-worker-#48%xx%][G] Blocked 
> > > > > system-critical thread has been detected. This can lead to 
> > > > > cluster-wide undefined behaviour [threadName=partition-exchanger, 
> > > > > blockedFor=57s]
> > > > >
> > > > > [22:26:21,954][SEVERE][ttl-cleanup-worker-#48%xx%][] Critical
> > > > > system error detected. Will be handled accordingly to configured 
> > > > > handler [hnd=StopNodeOrHaltFailureHandler [tryStop=false, timeout=0, 
> > > > > super=AbstractFailureHandler 
> > > > > [ignoredFailureTypes=[SYSTEM_WORKER_BLOCKED]]], 
> > > > > failureCtx=FailureContext [type=SYSTEM_WORKER_BLOCKED, err=class 
> > > > > o.a.i.IgniteException: GridWorker [name=partition-exchanger, 
> > > > > igniteInstanceName=xx, finished=false, 
> > > > > heartbeatTs=1568931981805]]]
> > > > >
> > > > >
> > > > >
> > > > >
> > > > > -
> > > > > Denis
> > > > >
> > > > >
> > > > > > On Thu, Oct 3, 2019 at 11:50 AM John Smith  
> > > > > > wrote:
> > > > > > > So I have been monitoring my node and the same one seems to stop 
> > > > > > > once in a while.
> > > > > > >
> > > > > > > https://www.dropbox.com/s/7n5qfsl5uyi1obt/ignite-logs.zip?dl=0
> > > > > > >
> > > > > > > I have attached the GC logs and the ignite logs. From what I see 
> > > > > > > from gc.logs I don't see big pauses. I could be wrong.
> > > > > > >
> > > > > > > The machine is 16GB and I have the configs here: 
> > > > > > > https://www.dropbox.com/s/hkv38s3vce5a4sk/ignite-config.xml?dl=0
> > > > > > >
> > > > > > > Here are the JVM settings...
> > > > > > >
> > > > > > > if [ -z "$JVM_OPTS" ] ; then
> > > > > > >     JVM_OPTS="-Xms2g -Xmx2g -server -XX:MaxMetaspaceSize=256m"
> > > > > > > fi
> > > > > > >
> > > > > > > JVM_OPTS="$JVM_OPTS -XX:+UseG1GC -verbose:gc -XX:+PrintGCDetails 
> > > > > > > -Xloggc:/var/log/apache-ignite/gc.log"
> > > > > > >
> > > > > > > JVM_OPTS="${JVM_OPTS} -Xss16m"


Re: Ignite SQL table ALTER COLUMN and RENAME COLUMN

2019-10-10 Thread Denis Mekhanikov
Favas,

It’s possible to remove a column and add another one using the ALTER TABLE SQL
command, but currently you can't change a column’s type.
Note that removing a column and adding another one with the same name but with
a different type can lead to data corruption.
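A minimal sketch over the JDBC thin driver (the table and column names are
illustrative):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    try (Connection conn = DriverManager.getConnection("jdbc:ignite:thin://127.0.0.1/");
         Statement stmt = conn.createStatement()) {
        // Remove the old column...
        stmt.executeUpdate("ALTER TABLE Person DROP COLUMN age");
        // ...and add the replacement under a NEW name to avoid the
        // corruption scenario described above.
        stmt.executeUpdate("ALTER TABLE Person ADD COLUMN age_years INT");
    }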

Denis
On 10 Oct 2019, 09:51 +0300, Muhammed Favas 
, wrote:
> Hi,
>
> Is there a way in ignite to ALTER the column to change the data type/nullable 
> property and also RENAME of column?
>
> I saw in ignite documentation that it will add in upcoming releases. Which 
> release it is planning for?
>
> Regards,
> Favas
>


Re: Gracefully shutting down the data grid

2019-10-08 Thread Denis Mekhanikov
Shiva,

What version of Ignite do you use and do you have security configured in the 
cluster?

There was a bug in Ignite before version 2.7 that has similar symptoms:
https://issues.apache.org/jira/browse/IGNITE-7624
It’s fixed under the following ticket: 
https://issues.apache.org/jira/browse/IGNITE-9535

Try updating to the latest version of Ignite and see if the issue is resolved 
there.

If this is not your case, then please collect thread dumps from all nodes and 
share them in this thread. Logs will also be useful.
Please don’t paste them into the message body; attach them as files.

Denis
On 30 Sep 2019, 17:49 +0300, Shiva Kumar , wrote:
> Hi all,
>
> I am trying to deactivate a cluster which is being connected with few clients 
> over JDBC.
> As part of these client connections, they insert records into many tables
> and run some long-running queries.
> At this time I am trying to deactivate the cluster [basically I am trying to
> take a data backup, so I need to deactivate the cluster first]. But the
> deactivation hangs, and control.sh never returns control.
> When I check the current cluster state with REST API calls, it sometimes
> reports that the cluster is inactive.
> After some time I am trying to activate the cluster but it returns this error:
>
> [root@ignite-test]# curl 
> "http://ignite-service-shiv.ignite.svc.cluster.local:8080/ignite?cmd=activate=ignite=ignite;
>   | jq
>   % Total    % Received % Xferd  Average Speed   Time    Time     Time  
> Current
>                                  Dload  Upload   Total   Spent    Left  Speed
> 100   207  100   207    0     0   2411      0 --:--:-- --:--:-- --:--:--  2406
> {
>   "successStatus": 0,
>   "sessionToken": "654F094484E24232AA74F35AC5E83481",
>   "error": "Failed to activate, because another state change operation is 
> currently in progress: deactivate\nsuppressed: \n",
>   "response": null
> }
>
>
> This means that my earlier deactivation did not complete properly.
> Is there any other way to deactivate the cluster, to terminate the existing
> client connections, or to terminate the running queries?
> I tried "kill -k -ar" from the visor shell, but it restarts a few nodes, and it
> ended up with an exception related to page corruption.
> Note: My Ignite deployment is on Kubernetes
>
> Any help is appreciated.
>
> regards,
> shiva
>
>


Re: nodes are restarting when i try to drop a table created with persistence enabled

2019-09-25 Thread Denis Mekhanikov
I think the issue is that Ignite can't recover from
IgniteOutOfMemory, even by removing data.
Shiva, did IgniteOutOfMemory occur for the first time when you did the
DROP TABLE, or before that?

Denis

ср, 25 сент. 2019 г. в 02:30, Denis Magda :
>
> Shiva,
>
> Does this issue still exist? Ignite Dev how do we debug this sort of thing?
>
> -
> Denis
>
>
> On Tue, Sep 17, 2019 at 7:22 AM Shiva Kumar  wrote:
>>
>> Hi dmagda,
>>
>> I am trying to drop the table which has around 10 million records and I am 
>> seeing "Out of memory in data region" error messages in Ignite logs and 
>> ignite node [Ignite pod on kubernetes] is restarting.
>> I have configured 3GB for default data region, 7GB for JVM and total 15GB 
>> for Ignite container and enabled native persistence.
>> Earlier I was under the impression that the restart was caused by
>> "SYSTEM_WORKER_BLOCKED" errors, but now I realize that
>> "SYSTEM_WORKER_BLOCKED" is added to the ignored-failures list and the actual
>> cause is "CRITICAL_ERROR" due to "Out of memory in data region"
>>
>> This is the error messages in logs:
>>
>> ""[2019-09-17T08:25:35,054][ERROR][sys-#773][] JVM will be halted 
>> immediately due to the failure: [failureCtx=FailureContext 
>> [type=CRITICAL_ERROR, err=class o.a.i.i.mem.IgniteOutOfMemoryException: 
>> Failed to find a page for eviction [segmentCapacity=971652, loaded=381157, 
>> maxDirtyPages=285868, dirtyPages=381157, cpPages=0, pinnedInSegment=3, 
>> failedToPrepare=381155]
>> Out of memory in data region [name=Default_Region, initSize=500.0 MiB, 
>> maxSize=3.0 GiB, persistenceEnabled=true] Try the following:
>>   ^-- Increase maximum off-heap memory size (DataRegionConfiguration.maxSize)
>>   ^-- Enable Ignite persistence (DataRegionConfiguration.persistenceEnabled)
>>   ^-- Enable eviction or expiration policies]]
>>
>> Could you please explain why the DROP TABLE operation causes "Out of memory
>> in data region", and how I can avoid it?
>>
>> We have a use case where application inserts records to many tables in 
>> Ignite simultaneously for some time period and other applications run a 
>> query on that time period data and update the dashboard. we need to delete 
>> the records inserted in the previous time period before inserting new 
>> records.
>>
>> Even during a DELETE FROM table operation, I have seen:
>>
>> "Critical system error detected. Will be handled accordingly to configured 
>> handler [hnd=StopNodeOrHaltFailureHandler [tryStop=false, timeout=0, 
>> super=AbstractFailureHandler [ignoredFailureTypes=[SYSTEM_WORKER_BLOCKED]]], 
>> failureCtx=FailureContext [type=CRITICAL_ERROR, err=class 
>> o.a.i.IgniteException: Checkpoint read lock acquisition has been timed 
>> out.]] class org.apache.ignite.IgniteException: Checkpoint read lock 
>> acquisition has been timed out.|
>>
>>
>>
>> On Mon, Apr 29, 2019 at 12:17 PM Denis Magda  wrote:
>>>
>>> Hi Shiva,
>>>
>>> That was designed to prevent global cluster performance degradation or
>>> other outages. Have you tried to apply my recommendation of turning off the
>>> failure handler for these system threads?
>>>
>>> -
>>> Denis
>>>
>>>
>>> On Sun, Apr 28, 2019 at 10:28 AM shivakumar  
>>> wrote:

 HI Denis,

 Is there any specific reason for the blocking of a critical thread, like the
 CPU being full or the heap being full?
 We are hitting this issue again and again.
 Is there any other way to drop tables/caches?
 This looks like a critical issue.

 regards,
 shiva



 --
 Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Ignition Start - Timeout if connection is unsuccessful

2019-09-11 Thread Denis Mekhanikov
Mahesh,

There is a TcpDiscoverySpi property that defines this behaviour:
TcpDiscoverySpi#joinTimeout [1]. So, instead of calling
Ignition.start() with a special property, you can specify this timeout
in the configuration.
If a join attempt fails and joinTimeout has already been exceeded, then
the client will stop trying.
Note that this timeout is not strict: the connection may take longer to
fail if TcpDiscoverySpi#reconnectDelay [2] is set or if each connection
attempt takes a long time.

[1] 
https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/spi/discovery/tcp/TcpDiscoverySpi.html#setJoinTimeout-long-
[2] 
https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/spi/discovery/tcp/TcpDiscoverySpi.html#setReconnectDelay-int-
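A minimal sketch of the configuration-based approach (the addresses and the
timeout value are illustrative):

    import java.util.Collections;
    import org.apache.ignite.Ignite;
    import org.apache.ignite.Ignition;
    import org.apache.ignite.configuration.IgniteConfiguration;
    import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
    import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;

    TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder();
    ipFinder.setAddresses(Collections.singletonList("server-host:47500..47509"));

    TcpDiscoverySpi discoSpi = new TcpDiscoverySpi();
    discoSpi.setIpFinder(ipFinder);
    discoSpi.setJoinTimeout(60_000); // stop retrying after ~60 s instead of looping forever

    IgniteConfiguration cfg = new IgniteConfiguration();
    cfg.setClientMode(true);
    cfg.setDiscoverySpi(discoSpi);

    Ignite ignite = Ignition.start(cfg); // fails with an exception if the join times out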

Denis

ср, 11 сент. 2019 г. в 09:30, Mahesh Renduchintala
:
>
> Hello
>
> We are currently using Ignition.Start to get the handle for thick client.
>
> >> ignite = Ignition.start(cfg);
>
> As I understand, this API is a blocking API, unless the connection is 
> successfully established.
>
> However, in some scenarios, where the thick client is unable to connect 
> properly, it is preferable to have a timeout option as specified below.
> >> ignite = Ignition.start(cfg,  timeout);
>
> is this already available today? If not, can you take it as an enhancement 
> request for 2.8.
>
> The reason I ask is that in some scenarios, when a thick client comes up for
> the very first time, we see the thick client attempting to connect to the
> Ignite servers almost in an infinite loop.
> Previously, I raised this infinite loop connecting issue before.
> http://apache-ignite-users.70518.x6.nabble.com/client-reconnect-working-td28570.html
>
> regards
> mahesh
>
>
>
>


Re: Using Persistent ignite queues

2019-09-03 Thread Denis Mekhanikov
Hi!

IgniteQueue is stored in the atomics cache, which is called
ignite-sys-atomic-cache@default-ds-group by default.
This cache is stored in the default data region, so in order to make it
persistent, you need to make the default data region persistent via the
DataStorageConfiguration#defaultDataRegionConfiguration property.
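A minimal sketch (the queue name is illustrative):

    import org.apache.ignite.Ignite;
    import org.apache.ignite.IgniteQueue;
    import org.apache.ignite.Ignition;
    import org.apache.ignite.configuration.CollectionConfiguration;
    import org.apache.ignite.configuration.DataStorageConfiguration;
    import org.apache.ignite.configuration.IgniteConfiguration;

    DataStorageConfiguration storageCfg = new DataStorageConfiguration();
    // The atomics cache lives in the default region, so persisting that
    // region persists IgniteQueue contents as well.
    storageCfg.getDefaultDataRegionConfiguration().setPersistenceEnabled(true);

    IgniteConfiguration cfg = new IgniteConfiguration();
    cfg.setDataStorageConfiguration(storageCfg);

    Ignite ignite = Ignition.start(cfg);
    ignite.cluster().active(true); // persistent clusters need explicit activation

    IgniteQueue<String> queue = ignite.queue("myQueue", 0, new CollectionConfiguration());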

The issue with the deadlock seems similar to the following one: 
https://issues.apache.org/jira/browse/IGNITE-10250

You can try a nightly build, where this issue is fixed: 
https://ci.ignite.apache.org/viewLog.html?buildId=lastSuccessful=Releases_NightlyRelease_RunApacheIgniteNightlyRelease=artifacts=1

Denis
On 10 Aug 2018, 13:32 +0300, dkol , wrote:
> Hi arunkjn
>
> were you able to resolve this issue ?
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Exception handling for asynchronous backup

2019-08-30 Thread Denis Mekhanikov
Hi!

If the cache is transactional, then no inconsistencies are possible, since
two-phase commit guarantees that all nodes have data records of the same
version.

In case of an atomic cache, primary node failure can indeed lead to an 
inconsistency between different versions of the same partitions.

There is a tool called idle_verify that can validate the consistency of data
between nodes:
https://apacheignite-tools.readme.io/docs/control-script#section-verification-of-partition-checksums
You can run it to find copies of the same partition with different states. After
that, restarting the problematic node, or iterating through all entries in the
affected partitions and setting them again, will fix the consistency.
If persistence is enabled, you will need to remove the problematic partitions
from disk. If you leave the one copy that you believe is valid, it will be
rebalanced to other nodes when they are started again.
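For reference, the check is run from the control script shipped with Ignite (a
sketch; the path depends on your installation):

    ./control.sh --cache idle_verify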

Denis
On 30 Aug 2019, 04:42 +0300, liyuj <18624049...@163.com>, wrote:
> Hi community,
>
> In the case of CacheWriteSynchronizationMode being asynchronous, if the
> asynchronous writing of data fails, leading to inconsistency between
> primary and backup data, what is the subsequent processing?
>


Re: Replicated cache partition size

2019-08-26 Thread Denis Mekhanikov
Niels,

I believe the reason is the performance of the affinity function and the size
of GridDhtPartitionsFullMessage.
The affinity function needs to assign partitions to nodes. In the case of a
replicated cache, there are (number of partitions) x (number of nodes) pairs of
(node, partition) that need to be calculated. This was especially critical in
Ignite 1.x when FairAffinityFunction was used, since its computational
performance was worse than that of Rendezvous.

I think it was decided to decrease the number of partitions for replicated
caches to make the affinity function work faster and to decrease the size of
the partition map.

For PARTITIONED caches it's important to keep an even distribution of
partitions across nodes. The rendezvous function is pseudo-random, so it needs
a large number of partitions to produce a fair distribution.
For REPLICATED caches this is not that critical, since every node holds the
same set of partitions anyway.

Denis
On 26 Aug 2019, 14:34 +0300, Niels Ejrnæs , wrote:
> Hi,
>
> Is there a particular reason why replicated caches have their partition
> count set to 512 by default?
> I found this in 
> org.apache.ignite.internal.processors.cache.GridCacheUtils#initializeConfigDefaults(IgniteLogger,
>  CacheConfiguration, CacheObjectContext):V
>
>     if (cfg.getAffinity() == null) {
>       ...
>     else if (cfg.getCacheMode() == REPLICATED) {
>     RendezvousAffinityFunction aff = new RendezvousAffinityFunction(false, 512);
>
>     cfg.setAffinity(aff);
>
>     cfg.setBackups(Integer.MAX_VALUE);
>     }
>
> The default partition size for the RendezvousAffinityFunction is 1024.
>
> Best regards
> Niels Elkjær Ejrnæs
>


Re: IgniteCache.destroy() taking long time

2019-08-26 Thread Denis Mekhanikov
Partition map exchange is an absolutely necessary procedure that cannot be
disabled. The functionality of all caches depends on it.

I checked, and cache destruction is performed as a part of a partition map
exchange, not the other way around. If you see that nodes join the cluster fast
but cache destruction is slow, then the exchange is not the reason for the
slowness, since PME also happens every time nodes join and leave the cluster.

How much data do you have in the cache? Cache destruction requires iterating
over the whole dataset, so it may take quite a while.

Denis
On 20 Aug 2019, 08:47 +0300, Shravya Nethula 
, wrote:
> Hi Denis,
>
> I have enabled the INFO logs and tried to delete a cache. Please find the 
> logs below:
>
> Aug 20, 2019 11:00:17 AM org.apache.ignite.logger.java.JavaLogger info
> INFO: Stopped cache [cacheName=Person]
> cache deleted in:17280 ms >>> This is a print statement from our code
> Aug 20, 2019 11:00:17 AM org.apache.ignite.logger.java.JavaLogger info
> INFO: Completed partition exchange 
> [localNode=90c11807-ac10-43a6-b7b6-605d5c07314d, 
> exchange=GridDhtPartitionsExchangeFuture [topVer=AffinityTopologyVersion 
> [topVer=5891, minorTopVer=2], evt=DISCOVERY_CUSTOM_EVT, 
> evtNode=TcpDiscoveryNode [id=90c11807-ac10-43a6-b7b6-605d5c07314d, 
> addrs=[0:0:0:0:0:0:0:1%lo, 127.0.0.1, 192.168.1.116, 192.168.1.249], 
> sockAddrs=[device-shravya-hp/192.168.1.249:0, /0:0:0:0:0:0:0:1%lo:0, 
> /127.0.0.1:0, /192.168.1.116:0], discPort=0, order=5890, intOrder=0, 
> lastExchangeTime=1566278969558, loc=true, ver=2.7.0#20181201-sha1:256ae401, 
> isClient=true], done=true], topVer=AffinityTopologyVersion [topVer=5891, 
> minorTopVer=2], durationFromInit=16882]
> Aug 20, 2019 11:00:17 AM org.apache.ignite.logger.java.JavaLogger info
>
> It looks like the partition exchange is taking most of the time. How can this
> be avoided? What is the importance of partition exchange?
>
>
> Regards,
> Shravya Nethula.
>
> From: Shravya Nethula 
> Sent: Monday, August 19, 2019 6:06:36 PM
> To: user@ignite.apache.org 
> Subject: Re: IgniteCache.destroy() taking long time
>
>
> The following is the cache configuration:
>
> CacheConfiguration cacheCfg = new CacheConfiguration(tableName);
> cacheCfg.setCacheMode(CacheMode.PARTITIONED);
> cacheCfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);
> cacheCfg.setBackups(1);
> cacheCfg.setWriteSynchronizationMode(CacheWriteSynchronizationMode.PRIMARY_SYNC);
> cacheCfg.setRebalanceBatchSize(1024 * 1024 * 4);
> cacheCfg.setRebalanceMode(CacheRebalanceMode.ASYNC);
> cacheCfg.setStatisticsEnabled(true);
> cacheCfg.setRebalanceDelay(100);
> cacheCfg.setDefaultLockTimeout(5000);
> cacheCfg.setReadFromBackup(true);
> cacheCfg.setQueryParallelism(16);
> cacheCfg.setRebalanceBatchesPrefetchCount(4);
> cacheCfg.setNodeFilter(new AttributeNodeFilter("ROLE", "data.compute"));
>
> There are 2 nodes in our cluster.
>
> From: Alexander Kor 
> Sent: Wednesday, August 14, 2019 8:41:31 PM
> To: user@ignite.apache.org 
> Subject: Re: IgniteCache.destroy() taking long time
>
> Hi,
>     Can you please share your cache configuration? How many nodes do you
> have in your cluster?
>     If you are running in PARTITIONED mode, then some exchange of information
> will occur.
>     More details here: https://apacheignite.readme.io/docs/cache-modes
>     Do you have a reproducer project?
> Thanks, Alex
>
>
> On Wed, Aug 14, 2019 at 1:22 AM Shravya Nethula 
>  wrote:
> > Hi,
> >
> > I have created a cache using the following API:
> > IgniteCache cache = (IgniteCache) 
> > ignite.getOrCreateCache(cacheCfg);
> >
> > Now when I try to delete the cache using the IgniteCache.destroy() API, it is
> > taking about 12-13 seconds.
> >
> > Why is it taking more execution time? Will there be any exchange of cache 
> > information among the nodes whenever a cache is deleted?
> > Is there any way in which, the execution time can be optimized?
> >
> > Regards,
> > Shravya Nethula.
> >
> >
> > Regards,
> > Shravya Nethula,
> > BigData Developer,
> >
> > Hyderabad.


Re: Capacity planning for production deployment on kubernetes

2019-08-23 Thread Denis Mekhanikov
Shiva,

What version of Ignite do you use?
Before version 2.7, Ignite used a different mechanism to limit the size of the
WAL history: the DataStorageConfiguration#walHistorySize property, which is
currently deprecated. This is what's explained on the internal documentation
page.

In Ignite versions starting from 2.7, the
DataStorageConfiguration#maxWalArchiveSize property is used. It specifies the
maximum size of the WAL history in bytes.
As you said, it's 4 times the checkpoint buffer size. Since you didn't change
the size of the checkpoint buffer, by default it's 1 GB in your case (the rules
for its calculation can be found here:
https://apacheignite.readme.io/docs/durable-memory-tuning#section-checkpointing-buffer-size).
So the maximum WAL history size is 4 GB.
Plus you need to add the size of the WAL itself, which is (64 MB per segment) x
(10 segments) = 640 MB by default.
If you use Ignite 2.7 or newer, then 10 GB should be enough for the WAL with
its archive. In that case, the failures should be investigated.

But Ignite 2.6 and older use a different approach, so there is no strict 
limitation on the WAL archive size.
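For illustration, a sketch of setting these limits explicitly on Ignite 2.7+
(the values mirror the defaults discussed above):

    import org.apache.ignite.configuration.DataStorageConfiguration;
    import org.apache.ignite.configuration.IgniteConfiguration;

    DataStorageConfiguration storageCfg = new DataStorageConfiguration();

    storageCfg.setWalSegmentSize(64 * 1024 * 1024);           // 64 MB per segment (default)
    storageCfg.setWalSegments(10);                            // 10 active segments = 640 MB
    storageCfg.setMaxWalArchiveSize(4L * 1024 * 1024 * 1024); // cap the archive at 4 GB

    IgniteConfiguration cfg = new IgniteConfiguration();
    cfg.setDataStorageConfiguration(storageCfg);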

Denis
On 22 Aug 2019, 22:35 +0300, Denis Magda , wrote:
> Please share the whole log file. It might be the case that something goes 
> wrong with volumes you attached to Ignite pods.
>
> -
> Denis
>
>
> > On Thu, Aug 22, 2019 at 8:07 AM Shiva Kumar  
> > wrote:
> > > Hi Denis,
> > >
> > > Thanks for your response,
> > > yes in our test also we have seen OOM errors and pod crash.
> > > so we will follow the recommendation for RAM requirements and also I was 
> > > checking to ignite documentation on disk space required for WAL + WAL 
> > > archive.
> > > here in this link  
> > > https://apacheignite.readme.io/docs/write-ahead-log#section-wal-archive
> > >
> > > it says: archive size is defined as 4 times the size of the checkpointing 
> > > buffer and checkpointing buffer is a function of the data region 
> > > (https://apacheignite.readme.io/docs/durable-memory-tuning#section-checkpointing-buffer-size)
> > >
> > > but in this link 
> > > https://cwiki.apache.org/confluence/display/IGNITE/Ignite+Persistent+Store+-+under+the+hood#IgnitePersistentStore-underthehood-SubfoldersGeneration
> > >
> > > under the "Estimating disk space" section, it explains how to estimate the
> > > disk space required for the WAL, but it is not clear. Can you please give
> > > me the correct recommendation for calculating the disk space required for
> > > the WAL + WAL archive?
> > >
> > > In one of my testing, I configured 4GB for data region and 10GB for 
> > > WAL+WAL archive but our pods crashing as disk mounted for WAL+WAL archive 
> > > runs out of space.
> > >
> > > [ignite@ignite-cluster-ignite-node-2 ignite]$ df -h
> > > Filesystem      Size  Used Avail Use% Mounted on
> > > overlay         158G   39G  112G  26% /
> > > tmpfs            63G     0   63G   0% /dev
> > > tmpfs            63G     0   63G   0% /sys/fs/cgroup
> > > /dev/vda1       158G   39G  112G  26% /etc/hosts
> > > shm              64M     0   64M   0% /dev/shm
> > > /dev/vdq        9.8G  9.7G   44M 100% /opt/ignite/wal
> > > /dev/vdr         50G  1.4G   48G   3% /opt/ignite/persistence
> > > tmpfs            63G   12K   63G   1% 
> > > /run/secrets/kubernetes.io/serviceaccount
> > > tmpfs            63G     0   63G   0% /proc/acpi
> > > tmpfs            63G     0   63G   0% /proc/scsi
> > > tmpfs            63G     0   63G   0% /sys/firmware
> > >
> > >
> > > and this is the error message in ignite node:
> > >
> > > "ERROR","JVM will be halted immediately due to the failure: 
> > > [failureCtx=FailureContext [type=SYSTEM_WORKER_TERMINATION, err=class 
> > > o.a.i.IgniteCheckedException: Failed to archive WAL segment 
> > > [srcFile=/opt/ignite/wal/node00-37ea8ba6-3198-46a1-9e9e-38aff27ed9c9/0006.wal,
> > >  
> > > dstFile=/opt/ignite/wal/archive/node00-37ea8ba6-3198-46a1-9e9e-38aff27ed9c9/0236.wal.tmp]]]"
> > >
> > > > On Thu, Aug 22, 2019 at 8:04 PM Denis Mekhanikov 
> > > >  wrote:
> > > > > Shivakumar,
> > > > >
> > > > > Such allocation doesn’t allow full memory utilization, so it’s 
> > > > > possible, that nodes will crash because of out of memory errors.
> > > > > So, it’s better to follow the given recommendation.
> > > > >
> > > > > If you want us to investigate reasons of the failures, please provid

Re: Capacity planning for production deployment on kubernetes

2019-08-22 Thread Denis Mekhanikov
Shivakumar,

Such an allocation doesn’t leave room for full memory utilization: if memory
gets fully utilized, it’s possible that nodes will crash because of
out-of-memory errors.
So it’s better to follow the given recommendation.

If you want us to investigate reasons of the failures, please provide logs and 
configuration of the failed nodes.

Denis
On 21 Aug 2019, 16:17 +0300, Shiva Kumar , wrote:
> Hi all,
> we are testing a field use case before deploying in the field, and we want to
> know whether the below resource limits are suitable for production.
> There are 3 nodes (3 pods on Kubernetes) running, each with the below
> configuration:
>
> DefaultDataRegion: 60 GB
> JVM: 32 GB
> Resource allocated for each container: 64 GB
>
> And the Ignite documentation says (JVM + all data regions) should not exceed
> 70% of the total RAM allocated to each node (container).
> But we started testing with the above configuration, and the Ignite cluster
> ran successfully for up to 9 days with some data ingestion; then suddenly the
> pods crashed and were unable to recover from the crash.
> Is the above resource configuration not good for node recovery?


Re: Does IgniteCache.containsKey lock the key in a Transaction?

2019-08-22 Thread Denis Mekhanikov
Yohan,

IgniteCache#containsKey(...) locks a key under pessimistic transactions with
the REPEATABLE_READ isolation level, just like a get() does.
And it doesn’t make servers send values back to the requesting node, so
basically it does what you need.
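A minimal sketch of that usage (the cache, key, and value are illustrative):

    import org.apache.ignite.transactions.Transaction;
    import org.apache.ignite.transactions.TransactionConcurrency;
    import org.apache.ignite.transactions.TransactionIsolation;

    try (Transaction tx = ignite.transactions().txStart(
            TransactionConcurrency.PESSIMISTIC, TransactionIsolation.REPEATABLE_READ)) {
        // containsKey() acquires the same lock a get() would, but the value
        // is not shipped to the caller.
        if (cache.containsKey("someKey"))
            cache.put("someKey", newValue);

        tx.commit(); // the lock is held until commit or rollback
    }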

Denis

> On 19 Aug 2019, at 14:08, Yohan Fernando  wrote:
> 
> Hi Nattapon,
>  
> Unfortunately explicit locks cannot be used within transactions and an 
> exception will be thrown.
>  
> It seems the only way is to rely on implicit locks using calls like get() and 
> containsKey(). I looked through the ignite source for these methods and it 
> does appear like containsKey delegates to the same call as get() but has a 
> flag about whether to serialize or not so I assume that containsKey might 
> avoid serialization. However I’m not an expert on the Ignite codebase so it 
> would be good if someone can confirm that this is indeed the case.
>  
> Thanks
>  
> Yohan
>  
> From: nattapon
> Sent: 19 August 2019 08:00
> To: user@ignite.apache.org 
> Subject: Re: Does IgniteCache.containsKey lock the key in a Transaction?
>  
> Caution: This email originated from outside of Tudor.
>  
> Hi Yohan,
>  
> There is the IgniteCache.lock(key) method, described in
> https://apacheignite.readme.io/docs/distributed-locks .
> Is it suited to your requirement?
>  
> IgniteCache cache = ignite.cache("myCache");
> 
> // Create a lock for the given key
> Lock lock = cache.lock("keyLock");
> try {
> // Acquire the lock
> lock.lock();
>   
> cache.put("Hello", 11);
> cache.put("World", 22);
> }
> finally {
> // Release the lock
> lock.unlock();
> }
>  
> Regards,
> Nattapon
>  
> On Fri, Aug 16, 2019 at 5:23 PM Yohan Fernando  > wrote:
> Hi All, Does  IgniteCache.containsKey() lock the key in a Transaction similar 
> to IgniteCache.get() ? Basically I want a lightweight call to lock the key 
> without having to Serialize objects from each node within a Transaction. 
>  
>  



Re: Cache spreading to new nodes

2019-08-22 Thread Denis Mekhanikov
Marco,

IgniteCache.localEntries(...).iterator()
<https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/IgniteCache.html#localEntries-org.apache.ignite.cache.CachePeekMode...->
will iterate over all entries in the cache on the local node. So it doesn't
iterate over caches, but over the entries in one cache.
It brings entries from off-heap to heap, so data is duplicated during
iteration. But no “local cache” is created. Entries are just brought to the
heap, which can be heavy for the garbage collector.

> Yes, I read that I should have set the attributes. However, now it feels
like an unnecessary step? What would that improve, in my case?

Node filters should be stateless and return the same result on all nodes.
So make sure that it's impossible for this node filter to act differently on
different nodes.
Using an attribute-based node filter is a safe way to choose nodes for caches,
since such a filter is guaranteed to work identically on every node.
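A minimal sketch of the attribute-based approach (the attribute name and value
are illustrative):

    import java.util.Collections;
    import org.apache.ignite.configuration.CacheConfiguration;
    import org.apache.ignite.configuration.IgniteConfiguration;
    import org.apache.ignite.util.AttributeNodeFilter;

    // On the nodes that should host the cache:
    IgniteConfiguration nodeCfg = new IgniteConfiguration();
    nodeCfg.setUserAttributes(Collections.singletonMap("cache.group", "optimization"));

    // In the cache configuration:
    CacheConfiguration<Long, String> cacheCfg = new CacheConfiguration<>("graphCache");
    cacheCfg.setNodeFilter(new AttributeNodeFilter("cache.group", "optimization"));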

> I have just one question: you called it "backup filter". Is the
nodeFilter a filter for only backup nodes or was that a typo? I thought it
was a filter for all the nodes for a cache.

Backup filter and node filter are different things.
The one that you specify using CacheConfiguration#setNodeFilter()
<https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/configuration/CacheConfiguration.html#setNodeFilter-org.apache.ignite.lang.IgnitePredicate->
is used to choose the nodes where a cache should be stored.

On the other hand, backupFilter is a property of RendezvousAffinityFunction
<https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/cache/affinity/rendezvous/RendezvousAffinityFunction.html>.
It can be used to choose where backup partitions should be stored based on the
location of the primary partition. A possible use case for it is making primary
and backup partitions reside on different racks in a datacenter.
As far as I can see, you don't need this one.

Denis

On 15 Aug 2019, at 10:05, Marco Bernagozzi 
wrote:

Hi,
Sorry, tearing down the project to make a runnable proved to be a much
bigger project than expected. I eventually managed, and the outcome is:
I used to call:
List<String> cacheNames = new ArrayList<>();
ignite.cacheNames().forEach(
    n -> {
        if (!n.equals("settingsCache")) {
            ignite.cache(n).localEntries(CachePeekMode.ALL).iterator()
                .forEachRemaining(a -> cacheNames.add(a.getKey().toString()));
        }
    }
);
to check the local caches, which apparently creates a local copy of the
cache in the machine (!?).
Now, I replaced it with:
List<String> cacheNames = new ArrayList<>();
UUID localId = ignite.cluster().localNode().id();
ignite.cacheNames().forEach(
    cache -> {
        if (!cache.equals("settingsCache")) {
            boolean containsCache = ignite.cluster().forCacheNodes(cache).nodes().stream()
                .anyMatch(n -> n.id().equals(localId));
            if (containsCache) {
                cacheNames.add(cache);
            }
        }
    }
);

And the issue disappeared. Is this intended behaviour? Because it looks
weird to me.

To reply to:
"I think, it’s better not to set it, because otherwise if you don’t trigger
the rebalance, then only one node will store the cache."
With the configuration I posted you, the cache is spread out to the
machines that I use in the setNodeFilter().

 Yes, I believe you're correct with the NodeFilter. It should be pointless to
have now, right? That was me experimenting and trying to figure out why the
cache was spreading to new nodes.

fetchNodes() fetches the ids of the local node and the k most empty nodes (
where k is given as an input for each cache). I check how full a node is
based on the code right above, in which I check how many caches a node has.

Yes, I read that I should have set the attributes. However, now it feels
like an unnecessary step? What would that improve, in my case?

 And yes, it makes sense now! Thanks for the clarification. I thought that
the rebalancing was rebalancing something in an uncontrolled way, but turns
out everything was due to my
ignite.cache(n).localEntries(CachePeekMode.ALL) creating a local cache.

I have just one question: you called it "backup filter". Is the nodeFilter
a filter for only backup nodes or was that a typo? I thought it was a
filter for all the nodes for a cache.

On Wed, 14 Aug 2019 at 17:58, Denis Mekhanikov 
wrote:

> Marco,
>
> Rebalance mode set to NONE means that your cache won’t be rebalanced at
> all unless you trigger it manually.
> I think, it’s better not to set it, because otherwise if you don’t trigger
> the rebalance, then only one node will store the cache.
>
> Also the backup filter specified in the affinity function doesn’t seem

Re: Cache spreading to new nodes

2019-08-14 Thread Denis Mekhanikov
Marco,

Rebalance mode set to NONE means that your cache won't be rebalanced at all
unless you trigger it manually.
I think it's better not to set it, because if you never trigger the rebalance,
only one node will store the cache.

Also, the backup filter specified in the affinity function doesn't seem correct
to me. It's always true, since your node filter accepts only those nodes that
are in the nodesForOptimization list.

What does the fetchNodes() method do?
The recommended way to implement node filters is to check custom node
attributes using an AttributeNodeFilter.

Partition map exchange is a process that happens after every topology change.
Nodes exchange information about the partition distribution of caches. So you
can't prevent it from happening.
The message that you see is a symptom, not a cause.

Denis


> On 13 Aug 2019, at 09:50, Marco Bernagozzi  wrote:
> 
> Hi, I did some more digging and discovered that the issue seems to be: 
> 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture:
>  Completed partition exchange 
> 
> Is there any way to disable or limit the partition exchange? 
> 
> Best, 
> Marco 
> 
> On Mon, 12 Aug 2019 at 16:59, Andrei Aleksandrov  
> wrote:
> Hi,
> 
> Could you share the whole reproducer with all configurations and required 
> methods?
> 
> BR,
> Andrei
> 
> 8/12/2019 4:48 PM, Marco Bernagozzi пишет:
>> I have a set of nodes, and I want to be able to set a cache in specific 
>> nodes. It works, but whenever I turn on a new node the cache is 
>> automatically spread to that node, which then causes errors like: 
>> Failed over job to a new node ( I guess that there was a computation going 
>> on in a node that shouldn't have computed that, and was shut down in the 
>> meantime). 
>> 
>> I don't know if I'm doing something wrong here or I'm missing something. 
>> As I understand it, NodeFilter and Affinity are equivalent in my case 
>> (Affinity is a node filter which also creates rules on where can the cache 
>> spread from a given node?). With rebalance mode set to NONE, shouldn't the 
>> cache be spread on the "nodesForOptimization" nodes, according to either the 
>> node filter or the affinityFunction? 
>> 
>> Here's my code: 
>> 
>> List nodesForOptimization = fetchNodes(); 
>> 
>> CacheConfiguration graphCfg = new CacheConfiguration<>(graphCacheName);
>> graphCfg = graphCfg.setCacheMode(CacheMode.REPLICATED)
>>     .setBackups(nodesForOptimization.size() - 1)
>>     .setAtomicityMode(CacheAtomicityMode.ATOMIC)
>>     .setRebalanceMode(CacheRebalanceMode.NONE)
>>     .setStoreKeepBinary(true)
>>     .setCopyOnRead(false)
>>     .setOnheapCacheEnabled(false)
>>     .setNodeFilter(u -> nodesForOptimization.contains(u.id()))
>>     .setAffinity(
>>         new RendezvousAffinityFunction(
>>             1024,
>>             (c1, c2) -> nodesForOptimization.contains(c1.id()) &&
>>                 nodesForOptimization.contains(c2.id())
>>         )
>>     )
>>     .setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC);



Re: IgniteCache.destroy() taking long time

2019-08-14 Thread Denis Mekhanikov
Folks,

Partition map exchange (PME) will actually happen in both cases: PARTITIONED
and REPLICATED.
We need to understand which part of the destroy takes the longest.

If you enable INFO logs, then you'll see messages about partition map exchange
happening when you destroy caches.
Check whether PME takes most of the time, or whether it's fast and something
else is blocking the destruction.

Could you tell how many nodes you have in your cluster?

Denis

> On 14 Aug 2019, at 18:11, Alexander Kor  wrote:
> 
> Hi,
> Can you please share your cache configuration? How many nodes do you
> have in your cluster?
> If you are running in PARTITIONED mode, then some exchange of information
> will occur.
> More details here: https://apacheignite.readme.io/docs/cache-modes 
> 
> Do you have a reproducer project?  
> Thanks, Alex
>   
> 
> On Wed, Aug 14, 2019 at 1:22 AM Shravya Nethula 
>  > wrote:
> Hi, 
> 
> I have created a cache using the following API: 
> IgniteCache cache = (IgniteCache) 
> ignite.getOrCreateCache(cacheCfg); 
> 
> Now when I try to delete the cache using the IgniteCache.destroy() API, it is
> taking about 12-13 seconds. 
> 
> Why is it taking more execution time? Will there be any exchange of cache 
> information among the nodes whenever a cache is deleted? 
> Is there any way in which, the execution time can be optimized? 
> 
> Regards, 
> Shravya Nethula.
> 
> 
> 
> Regards,
> Shravya Nethula,
> BigData Developer,
> 
> Hyderabad.
> 



Re: about single service redistribution after node restart

2019-07-03 Thread Denis Mekhanikov
Hi!

Currently services don't get redistributed when new nodes join the topology.
So, if you have all services deployed on one node, they won't be moved to
newly joined ones.
This is a known issue. The following ticket mentions it:
https://issues.apache.org/jira/browse/IGNITE-7667
For now, an even distribution can be achieved by redeploying the services, as
sketched below.
Also, if you add a third node and then kill the first one (containing all the
services), then the two nodes that are left in the cluster will host an even
share of the services.
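A minimal sketch of the redeployment workaround (the service name and
implementation class are illustrative, and "ignite" is assumed to be a started
instance):

    import org.apache.ignite.IgniteServices;

    IgniteServices svcs = ignite.services();

    // Cancel the old deployment and deploy again, so instances are spread
    // over the current topology.
    svcs.cancel("scheduleQ_1");
    svcs.deployMultiple("scheduleQ_1", new ScheduleService(),
        2,  // total instances across the cluster
        1); // at most one instance per node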

Denis

пн, 1 июл. 2019 г. в 19:21, 李奉先 :

> My cluster has two nodes.When one of the
> node(uuid:22ff0f84-9d6c-4b7f-9e71-8892c0dcc7fe)  is down and restarted, the
> single service will not be redistributed  and only normal
> nodes(uuid:d44abb38-b870-42c7-8c37-ca5e9cee3232)  exist.
>
> I print service topologySnapshot
> before one node(uuid:22ff0f84-9d6c-4b7f-9e71-8892c0dcc7fe)  reboot :
> =
> scheduleQ_1
> uuid=d44abb38-b870-42c7-8c37-ca5e9cee3232
> integer=0
> uuid=22ff0f84-9d6c-4b7f-9e71-8892c0dcc7fe
> integer=1
> =
> scheduleQ_2
> uuid=d44abb38-b870-42c7-8c37-ca5e9cee3232
> integer=1
> uuid=22ff0f84-9d6c-4b7f-9e71-8892c0dcc7fe
> integer=0
> =
> scheduleQ_3
> uuid=d44abb38-b870-42c7-8c37-ca5e9cee3232
> integer=0
> uuid=22ff0f84-9d6c-4b7f-9e71-8892c0dcc7fe
> integer=1
> =
> scheduleQ_4
> uuid=d44abb38-b870-42c7-8c37-ca5e9cee3232
> integer=1
> uuid=22ff0f84-9d6c-4b7f-9e71-8892c0dcc7fe
> integer=0
>
> after reboot one node:
> =
> scheduleQ_1
> uuid=d44abb38-b870-42c7-8c37-ca5e9cee3232
> integer=1
> uuid=22ff0f84-9d6c-4b7f-9e71-8892c0dcc7fe
> integer=0
> =
> scheduleQ_2
> uuid=d44abb38-b870-42c7-8c37-ca5e9cee3232
> integer=1
> uuid=22ff0f84-9d6c-4b7f-9e71-8892c0dcc7fe
> integer=0
> =
> scheduleQ_3
> uuid=d44abb38-b870-42c7-8c37-ca5e9cee3232
> integer=1
> uuid=22ff0f84-9d6c-4b7f-9e71-8892c0dcc7fe
> integer=0
> =
> scheduleQ_4
> uuid=d44abb38-b870-42c7-8c37-ca5e9cee3232
> integer=1
> uuid=22ff0f84-9d6c-4b7f-9e71-8892c0dcc7fe
> integer=0
>
>
> I hope the  single service can be redistributed in two nodes
> How do I do it?
>
> Regards,
>


Re: Connect external application to Ignite cluster via REST

2019-06-17 Thread Denis Mekhanikov
Kushan,

I would recommend using one of the thin clients, since they have better
performance compared to REST.
They work over a binary protocol, while REST needs data to be serialized as
strings.

Connectors for thin clients are enabled on all nodes by default, so you can
connect to any node.
Find more in the documentation:
https://apacheignite.readme.io/docs/thin-clients
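A minimal sketch with the Java thin client (the addresses and cache name are
illustrative). Note that passing several addresses lets the client fail over
if the node it is connected to goes down:

    import org.apache.ignite.Ignition;
    import org.apache.ignite.client.ClientCache;
    import org.apache.ignite.client.IgniteClient;
    import org.apache.ignite.configuration.ClientConfiguration;

    ClientConfiguration clientCfg = new ClientConfiguration()
        .setAddresses("node1:10800", "node2:10800", "node3:10800");

    try (IgniteClient client = Ignition.startClient(clientCfg)) {
        ClientCache<Integer, String> cache = client.cache("myCache");
        System.out.println(cache.get(1)); // read a data object from the cluster
    }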

Denis

пт, 14 июн. 2019 г. в 20:10, Kishan :

> We have ignite cluster up with four nodes. This cluster is hydrated with
> data
> into its cache. We want an external application to connect with cluster and
> get all the data objects of particular cache from the cluster via REST.
>
> I am thinking of following approaches:
> 1) Using ignite's service grid and deploy service into the grid, But I am
> not able to figure out a way to consume that service via REST call.
> 2) Someone suggested to bring up the service using ignite's compute task
> which can be invoked via REST call. The problem with approach is that the
> compute task won't return set of objects to REST client.
> 3) Bringing up the Jetty web server using service deployed on service
> grid(starting server in init method of service), and then make REST client
> to connect with that web server.
> 4) Using thin clients provided by ignite(Problem with this is that client
> will connect to only one cluster node, what if that node goes down)
> 5) Using REST APIs provided by ignite(As we want all the data objects of
> one
> cache, we are thinking of using REST APIs which executes SQL query and
> return output objects from cluster's cache)
>
> Which approach will work and which is the best one?
>
> Thanks in advance.
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: loading a cache into memory after startup

2019-06-13 Thread Denis Mekhanikov
Hi!

You can use a scan query that will iterate over the whole dataset and bring it
to memory. You don't need to perform any specific processing: just touching
the data is enough for it to appear in memory.
In version 2.8 there will be a new API allowing partition preloading.
IgniteCache.preloadPartition(...) will let you bring all data associated with
the given partition to memory.
Here is the JIRA ticket that introduces this ability:
https://issues.apache.org/jira/browse/IGNITE-10019
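A minimal sketch of the scan-query warm-up (the cache name is illustrative,
and "ignite" is assumed to be a started instance):

    import javax.cache.Cache;
    import org.apache.ignite.IgniteCache;
    import org.apache.ignite.cache.query.QueryCursor;
    import org.apache.ignite.cache.query.ScanQuery;

    IgniteCache<Object, Object> cache = ignite.cache("bigTable");

    try (QueryCursor<Cache.Entry<Object, Object>> cur = cache.query(new ScanQuery<>())) {
        long cnt = 0;
        for (Cache.Entry<Object, Object> e : cur)
            cnt++; // touching each entry is enough to page it into memory
        System.out.println("Warmed up " + cnt + " entries");
    }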

Denis

чт, 13 июн. 2019 г. в 07:44, mahesh76private :

> Hi,
>
> After the Ignite cluster (with data, assuming about 50 GB of data) starts up,
> we experience significant delays before the data is accessible.
>
>
> Understandably, Ignite brings data from the backup on disk into memory based
> on queries.
>
> Is there a way to load as much of the data as possible from the backup (disk)
> into memory (allocated in the config XML, see below) in one shot after a
> cluster start-up?
>
> *
>  class="org.apache.ignite.configuration.DataRegionConfiguration">
> 
> 
> 
> 
> 
> *
>
> Right now, data seems to come into memory based on queries. So when we have
> large tables with say 12 million records, some of the queries wait for
> about
> 3-4 minutes...
>
> regards
>
>
>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Re:Re:RE: Re: Node can not join cluster

2019-05-22 Thread Denis Mekhanikov
Vishalan,

Please create a new thread and provide information about your setup and
logs.
The OP doesn't seem to be getting your questions.

Denis

пн, 20 мая 2019 г. в 12:44, Vishalan :

> What was the solution to the above problem? I am facing the same issue.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Trouble with continuous queries

2019-05-22 Thread Denis Mekhanikov
Mike,

Could you show the code that you use to register the continuous query?
Maybe there is some misconfiguration?

Denis

пн, 20 мая 2019 г. в 17:47, Mike Needham :

> Hi All,
>
> I have a cache that is running and is defined as
> IgniteCache exchCache = ignite.getOrCreateCache(new CacheConfiguration<>("EXCHANGE")
>     .setIndexedTypes(Long.class, Exchange.class)
>     .setAtomicityMode(CacheAtomicityMode.ATOMIC)
>     .setBackups(0));
>
> From a .NET client, how can I set up a continuous query so that I am
> notified of changes to the cache? I can access the cache via DBeaver
> and other SQL tools.
>
> The documentation does not make it clear how to set this up. I want ALL
> changes to the cache to be sent to the client. The .NET example does not
> appear to work for this scenario. It is using a simple  for
> cache. I have tried  but it does not appear to ever be
> notified of events
>
> On Tue, May 14, 2019 at 10:55 AM Denis Mekhanikov 
> wrote:
>
>> Mike,
>>
>> First of all, it's recommended to have a separate cache per table to
>> avoid storing objects of different types in the same cache.
>>
>> Continuous query receives all updates on the cache regardless of their
>> type. Local listener is invoked when new events happen. Existing records
>> can be processed using initial query.
>>
>> Refer to the following documentation page for more information:
>> https://apacheignite.readme.io/docs/continuous-queries
>>
>> Denis
>>
>> чт, 2 мая 2019 г. в 14:14, Mike Needham :
>>
>>> I have seen that example. What I do not understand is: I have two SQL
>>> tables in a cache on a cluster with n nodes. The cache is loaded ahead of
>>> time, and a client wants to be notified when the contents of the cache are
>>> changed. Do you have to keep the continuous query in a never-ending loop
>>> to stop it from ending? All the examples simply use ContinuousQuery<
>>> Integer, String>; my example uses  which is a
>>> Java class defining the structure. Do I just set up a
>>> ContinuousQuery
>>>
>>> On Thu, May 2, 2019 at 3:59 AM aealexsandrov 
>>> wrote:
>>>
>>>> Hi,
>>>>
>>>> The good example of how it can be done you can see here:
>>>>
>>>>
>>>> https://github.com/gridgain/gridgain-advanced-examples/blob/master/src/main/java/org/gridgain/examples/datagrid/query/ContinuousQueryExample.java
>>>>
>>>> You can set remote listener to handle changes on remote nodes and local
>>>> listers for current.
>>>>
>>>> Note that you will receive updates only until the ContinuousQuery is
>>>> closed, or until the node that started it leaves the cluster.
>>>>
>>>> Also, you can try to use CacheEvents like in example here:
>>>>
>>>> https://apacheignite.readme.io/docs/events#section-remote-events
>>>>
>>>> Note that events can affect your performance.
>>>>
>>>> BR,
>>>> Andrei
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> --
>>>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>>>>
>>>
>>>
>>> --
>>> *Don't be afraid to be wrong. Don't be afraid to admit you don't have
>>> all the answers. Don't be afraid to say "I think" instead of "I know."*
>>>
>>
>
> --
> *Don't be afraid to be wrong. Don't be afraid to admit you don't have all
> the answers. Don't be afraid to say "I think" instead of "I know."*
>


Re: Integrity of write behind

2019-05-17 Thread Denis Mekhanikov
John,

Entries are queued for persisting only on primary nodes, so if a primary node
fails before writing all queued updates to the underlying database, some
entries will never be written to the database at all.
This is the price for the better performance that write-behind provides.

Take a look at the following thread for more information:
http://apache-ignite-users.70518.x6.nabble.com/Data-lost-when-using-write-behind-td4265.html
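For context, a sketch of a write-behind cache configuration (the store factory
class is hypothetical; the write-behind properties are standard Ignite):

    import javax.cache.configuration.FactoryBuilder;
    import org.apache.ignite.configuration.CacheConfiguration;

    CacheConfiguration<Long, String> ccfg = new CacheConfiguration<>("orders");

    ccfg.setCacheStoreFactory(FactoryBuilder.factoryOf(MyJdbcStore.class)); // hypothetical store
    ccfg.setWriteThrough(true);
    ccfg.setWriteBehindEnabled(true);         // async persistence, better throughput
    ccfg.setWriteBehindFlushFrequency(5_000); // entries may wait in the queue up to 5 s
    ccfg.setWriteBehindFlushSize(10_240);     // or until this many entries accumulate

Entries sitting in that queue on a failed primary node are exactly what can be
lost.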


Denis

пт, 17 мая 2019 г. в 06:02, Coleman, JohnSteven (Agoda) <
johnsteven.cole...@agoda.com>:

> Hi,
>
>
>
> What happens if a node goes down while write behind is in progress on a
> cache that’s persisting to the database? Can the task of persistence be
> carried on exactly where it failed by a backup node? Are cache entries
> flagged when they have been successfully persisted so that another node can
> pick up the task later? Should the persistence layer keep a version number
> or similar so that updates are orderly and not duplicated?
>
>
>
> John
>
>


Re: Store raw binary value in Apache Ignite through thin python client

2019-05-16 Thread Denis Mekhanikov
Hi!

Thanks for the report! It seems that the implementation of serialization of
primitive arrays is not optimal.
I created a JIRA ticket for this issue:
https://issues.apache.org/jira/browse/IGNITE-11854

Denis

вт, 14 мая 2019 г. в 13:53, kulinskyvs :

> I'm trying to save some raw binary data into Apache Ignite using thin
> Python
> client and the process is very slow (actually it looks like it depends on
> the size of the data).
>
> For my simple test case I've started locally Ignite with single node
> (version 2.7.0 without any specific configuration):
>
>
> Also, I have a binary file with the content I want to store in Ignite. The
> size if around 6MB. Here is my source code:
>
>
> After *my_cache.put* the process just freezes. I haven't even managed to
> wait till the end (I terminated it after 2 minutes).
>
> However, passing the file content transformed into a string works very fast.
>
>
> What am I missing?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: java.lang.NoSuchMethodError: org.apache.ignite.IgniteCache.getName()Ljava/lang/String;

2019-05-16 Thread Denis Mekhanikov
Tomasz,

There is no such Ignite version as 2.8.0. The latest one is 2.7.0.
Please make sure that all dependencies are resolved correctly.

Denis

пн, 13 мая 2019 г. в 12:50, Tomasz Prus :

> Hi,
> i'm trying to set up Ignite 2.8.0 in our application but i get such error:
>
> java.lang.NoSuchMethodError:
> org.apache.ignite.IgniteCache.getName()Ljava/lang/String;
> at org.apache.ignite.cache.spring.SpringCache.getName(SpringCache.java:53)
> at
> org.springframework.cache.interceptor.CacheAspectSupport$CacheOperationContext.createCacheNames(CacheAspectSupport.java:756)
> at
> org.springframework.cache.interceptor.CacheAspectSupport$CacheOperationContext.<init>(CacheAspectSupport.java:670)
> at
> org.springframework.cache.interceptor.CacheAspectSupport.getOperationContext(CacheAspectSupport.java:237)
> at
> org.springframework.cache.interceptor.CacheAspectSupport$CacheOperationContexts.<init>(CacheAspectSupport.java:570)
> at
> org.springframework.cache.interceptor.CacheAspectSupport.execute(CacheAspectSupport.java:317)
> at
> org.springframework.cache.interceptor.CacheInterceptor.invoke(CacheInterceptor.java:61)
> at
> org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:185)
> at
> org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:212)
> at com.sun.proxy.$Proxy235.findMemberUserIdAnPermissionSetId(Unknown
> Source)
>
> We use Spring version 5.0.7, but I have tried with several newer versions.
> Can You help a little?
>


Re: Ignite WebSessionFilter client threads eating up CPU usage

2019-05-16 Thread Denis Mekhanikov
Patrick,

Could you explain in a bit more detail why you think the striped pool is the
reason?
The striped pool is where cache operations happen. So, if you see that it
consumes a lot of CPU, it probably means that a lot of cache operations are
coming to the cluster.

You can record a JFR and see where most of the time is spent. For more
information refer to the following page:
https://apacheignite.readme.io/docs/jvm-and-system-tuning#section-flightrecorder-settings

Denis

вт, 14 мая 2019 г. в 10:43, wbyeh :

> We are using Ignite 2.7 as a 4-server-node session cluster but recent weeks
> we found some Ignite WebSessionFilter client threads eating up CPU usage
> after days of running.
> The GC is running smoothly.
>
> How to address the root cause?
> Is it an Ignite 2.x known issue with StripedExecutor?
>
> Can experts here point out any solutions?
>
> Attached file contained thread dump & 'top' usage.
>
> 811261.jstack
> 
>
>
> Thank you!
>
> -Patrick
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Metrics to get size of cache in memory

2019-05-15 Thread Denis Mekhanikov
Gupabhi,

Memory-related metrics are available at the data region level. You can see
how much space is occupied by each data region on every node.
You can find the available metrics here:
https://apacheignite.readme.io/docs/memory-metrics
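As an illustration, a minimal sketch of reading them from the Java API
(assumes ignite is a started node and metrics are enabled on the region via
DataRegionConfiguration#setMetricsEnabled(true)):

import org.apache.ignite.DataRegionMetrics;

// Print per-region off-heap usage on the local node.
for (DataRegionMetrics m : ignite.dataRegionMetrics()) {
    System.out.println(m.getName()
        + ": physical=" + m.getPhysicalMemorySize() + " bytes"
        + ", allocated pages=" + m.getTotalAllocatedPages());
}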

Denis

ср, 15 мая 2019 г. в 01:21, gupabhi :

> Hello,
>  I'm looking to get memory (heap/off heap) consumed by my cache across
> the cluster. If possible also per node. What is the best way of doing this.
>
> Thanks,
> Gupabhi
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Timestamp with Python thin client

2019-05-14 Thread Denis Mekhanikov
Stéphane,

Could you provide the code that results in this exception?
Are you trying to insert the tuple as a single field via SQL? There is no
such primitive as a tuple in SQL, so you should probably split the timestamp
into two separate columns, datetime and nanoseconds, and store them
separately.

Denis

сб, 4 мая 2019 г. в 10:39, Stéphane Thibaud :

> Hello Apache users,
>
> I am running into the following issue: when I try to store a timestamp
> with nanosecond precision with the Python Thin client, I get the stack
> trace below. I have specified the timestamp as a tuple of (datetime,
> nanoseconds) as that is the format in which I also get timestamps back from
> the apache ignite client. Strangely, I can set just a datetime, but then
> the nanoseconds become zero. Am I doing it in the wrong way? Any
> suggestions?
>
>
>
> db.sql(query, query_args=[converted_row[c] for c in
> table.column_names])
>   File
> "/home/snthibaud/PycharmProjects/tabee/venv/lib/python3.7/site-packages/pyignite/client.py",
> line 401, in sql
> max_rows, timeout,
>   File
> "/home/snthibaud/PycharmProjects/tabee/venv/lib/python3.7/site-packages/pyignite/api/sql.py",
> line 370, in sql_fields
> 'include_field_names': include_field_names,
>   File
> "/home/snthibaud/PycharmProjects/tabee/venv/lib/python3.7/site-packages/pyignite/queries/__init__.py",
> line 260, in from_python
> buffer += c_type.from_python(values[name])
>   File
> "/home/snthibaud/PycharmProjects/tabee/venv/lib/python3.7/site-packages/pyignite/datatypes/internal.py",
> line 471, in from_python
> buffer += infer_from_python(x)
>   File
> "/home/snthibaud/PycharmProjects/tabee/venv/lib/python3.7/site-packages/pyignite/datatypes/internal.py",
> line 399, in infer_from_python
> if is_hinted(value):
>   File
> "/home/snthibaud/PycharmProjects/tabee/venv/lib/python3.7/site-packages/pyignite/utils.py",
> line 51, in is_hinted
> and issubclass(value[1], IgniteDataType)
>   File "/usr/lib/python3.7/abc.py", line 143, in __subclasscheck__
> return _abc_subclasscheck(cls, subclass)
> TypeError: issubclass() arg 1 must be a class
>
> Kind regards,
>
> Stéphane Thibaud
>
>
>


Re: Two DataStreamer pods for load balancing on Kubernetes

2019-05-14 Thread Denis Mekhanikov
Sheshananda,

> we are not getting any logs for the 2nd pod
Do you mean that the client node with the data streamer doesn't join the
cluster? Or the node joins, but just doesn't receive any data from Kafka?

Denis

пт, 3 мая 2019 г. в 06:26, sheshananda :

> HI,
>
> I am using DataStreamer to load data from KAFKA to IGNITE.
>
> When we inject data with one DataStreamer pod we are able to see logs on
> DataStreamer pod.
>
> Similarly, when we try with two DataStreamer pods, logs are coming
> only on the 1st pod. We are not getting any logs for the 2nd pod.
>
> Both DataStreamer pods are in same consumerGroup.
>
> Please let me know the configurations to use multiple DataStreamer for load
> balancing.
>
> Thanks and Regards
> Sheshananda
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Trouble with continuous queries

2019-05-14 Thread Denis Mekhanikov
Mike,

First of all, it's recommended to have a separate cache per table, to avoid
storing objects of different types in the same cache.

A continuous query receives all updates on the cache regardless of their
type. The local listener is invoked when new events happen. Existing records
can be processed using an initial query; see the sketch below.

Refer to the following documentation page for more information:
https://apacheignite.readme.io/docs/continuous-queries
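A minimal sketch of both parts together (the cache name and the Exchange
value class are hypothetical; assumes ignite is a started node and the usual
imports: javax.cache.Cache, javax.cache.event.CacheEntryEvent,
org.apache.ignite.cache.query.*):

IgniteCache<Integer, Exchange> cache = ignite.cache("exchangeCache");

ContinuousQuery<Integer, Exchange> qry = new ContinuousQuery<>();

// Initial query: processes the records that already exist in the cache.
qry.setInitialQuery(new ScanQuery<Integer, Exchange>());

// Local listener: invoked for every subsequent update.
qry.setLocalListener(evts -> {
    for (CacheEntryEvent<? extends Integer, ? extends Exchange> e : evts)
        System.out.println("Updated: " + e.getKey());
});

QueryCursor<Cache.Entry<Integer, Exchange>> cur = cache.query(qry);

for (Cache.Entry<Integer, Exchange> e : cur)
    System.out.println("Existing: " + e.getKey());

// No never-ending loop is needed: updates keep arriving until
// cur.close() is called or the node stops.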

Denis

чт, 2 мая 2019 г. в 14:14, Mike Needham :

> I have seen that example. What I do not understand is: I have two SQL
> tables in a cache that has n number of nodes. It is loaded ahead of time,
> and a client wants to be notified when the contents of the cache are
> changed. Do you have to run the continuous query in a never-ending loop to
> keep it from ending? All the examples simply use ContinuousQuery<
> Integer, String>; my example uses a Java class defining the structure. Do
> I just set up a ContinuousQuery with Exchange as the value class?
>
> On Thu, May 2, 2019 at 3:59 AM aealexsandrov 
> wrote:
>
>> Hi,
>>
>> The good example of how it can be done you can see here:
>>
>>
>> https://github.com/gridgain/gridgain-advanced-examples/blob/master/src/main/java/org/gridgain/examples/datagrid/query/ContinuousQueryExample.java
>>
>> You can set a remote listener to handle changes on remote nodes and a
>> local listener for the current one.
>>
>> Note that you will get updates only until the ContinuousQuery is closed
>> or the node that started it leaves the cluster.
>>
>> Also, you can try to use CacheEvents like in example here:
>>
>> https://apacheignite.readme.io/docs/events#section-remote-events
>>
>> Note that events can affect your performance.
>>
>> BR,
>> Andrei
>>
>>
>>
>>
>>
>> --
>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>>
>
>
> --
> *Don't be afraid to be wrong. Don't be afraid to admit you don't have all
> the answers. Don't be afraid to say "I think" instead of "I know."*
>


Re: sizing

2019-05-14 Thread Denis Mekhanikov
Clay,

If you want to store plain strings without any schema or markup, then use
varchar.
But if you plan to store POJOs, then binary objects should certainly be used
instead of varchar. Binary types contain meta information, improving the
type safety of the stored data.

Binary objects don't apply any compression. Compression is planned to be
implemented at the data storage level, not at the object format level.
IEP related to data compression:
https://cwiki.apache.org/confluence/display/IGNITE/IEP-20%3A+Data+Compression+in+Ignite

Denis

ср, 1 мая 2019 г. в 19:33, Evgenii Zhuravlev :

> Hi,
>
> What do you mean here? Do you mean binary fields VS varchars? Could you
> please give an example of the object?
>
> Thank you,
> Evgenii
>
> вт, 30 апр. 2019 г. в 12:27, Clay Teahouse :
>
>> Hi All
>> If I choose binary data type for my objects, as opposed to varchar, will
>> it result in any saving, and yes, how much?
>> I know that binary type would be faster to read/write but wanted to see
>> if there will be any saving in storage.
>>
>> thanks
>> Clay
>>
>


Re: Error in running wordcount hadoop example in ignite

2019-04-02 Thread Denis Mekhanikov
Hi!

As far as I can see, you tried running examples from Hadoop 2.7.7,
but Ignite uses Hadoop version 2.4.1 internally.
So I would start by checking the same examples with the matching version of
Hadoop (2.4.1).

Denis

чт, 28 февр. 2019 г. в 08:58, mehdi sey :

> Hi,
> I want to execute the wordcount example of Hadoop in Apache Ignite. I have
> used apache-ignite-hadoop-2.6.0-bin to execute map-reduce tasks. My
> default-config.xml in the apache-ignite-hadoop-2.6.0-bin/config folder is
> just as below:
> [The XML was mangled by the mail archive. Recoverable content: a Spring
> beans file enabling IGFS and Apache Hadoop map-reduce support, containing
> a PropertyPlaceholderConfigurer, an IgniteConfiguration bean with a
> ConnectorConfiguration, and an includeEventTypes list referencing
> EVT_TASK_STARTED, EVT_TASK_FINISHED, EVT_JOB_MAPPED, EVT_TASK_FAILED,
> EVT_TASK_TIMEDOUT, EVT_TASK_SESSION_ATTR_SET, EVT_TASK_REDUCED,
> EVT_CACHE_OBJECT_PUT, EVT_CACHE_OBJECT_READ and EVT_CACHE_OBJECT_REMOVED.]
>
> I have run an Ignite node with the below command on the command line in
> Linux Ubuntu:
> */usr/local/apache-ignite-hadoop-2.6.0-bin/bin/ignite.sh
> /usr/local/apache-ignite-hadoop-2.6.0-bin/config/default-config.xml*
>
> After starting the Ignite node, I executed the Hadoop wordcount example on
> Ignite with the below command:
>
> *./hadoop jar
>
> /usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.7.jar
> wordcount /input/hadoop output2*
>
> But after executing the above command I encountered an error, as shown in
> the attached image. Please help with solving the problem. I have also seen
> the link below, but it did not help.
>
> http://apache-ignite-users.70518.x6.nabble.com/NPE-issue-with-trying-to-submit-Hadoop-MapReduce-tc2146.html#a2183
>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: What's the difference between client node, jdbc driver and thin client

2019-03-25 Thread Denis Mekhanikov
In general, client nodes have better performance than thin clients, but they
require more resources and are more tightly coupled with the rest of the
cluster.
The thin client and the thin JDBC driver have about the same performance, so
the choice depends on the use case.

So, your understanding is quite correct.
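For illustration, a sketch of the three connection styles (the address and
port are assumptions for a local setup; usual imports from org.apache.ignite,
org.apache.ignite.client, org.apache.ignite.configuration and java.sql):

// 1. Client node: joins the cluster topology.
Ignite thick = Ignition.start(new IgniteConfiguration().setClientMode(true));

// 2. Java thin client: connects over a socket, doesn't join the topology.
IgniteClient thin = Ignition.startClient(
    new ClientConfiguration().setAddresses("127.0.0.1:10800"));

// 3. Thin JDBC driver: SQL only, also doesn't join the topology.
Connection jdbc = DriverManager.getConnection("jdbc:ignite:thin://127.0.0.1");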

Denis

сб, 23 мар. 2019 г. в 18:12, kcheng.mvp :

> from the document
>
> client node: https://apacheignite.readme.io/docs/clients-vs-servers
> jdbc driver: https://apacheignite-sql.readme.io/docs/jdbc-client-driver
> thin client: https://apacheignite.readme.io/docs/java-thin-client
>
>
> when a node runs in *client mode*, it joins the cluster topology.
> when accessing the ignite cluster with the jdbc driver, it just behaves as
> a normal database driver (like the mysql jdbc driver)
> when accessing the cluster via *thin client*, the documentation states
> that it does not join the cluster topology.
>
> it seems there is no difference between the *jdbc driver* and the *thin
> client* except for the *cache api*
>
> for performance consideration: client node > thin client > jdbc driver.
>
> is my understanding correct?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Question about SQL query when persistence enabled

2019-03-25 Thread Denis Mekhanikov
Justin,

You can think of your dataset as if there were no separation between disk
and memory: all data is always available to all SQL operations.

Data that is available in memory will be used right away. Data that is
present only on disk will be loaded into memory first and after that used by
the query.

You can read about page rotation here:
https://cwiki.apache.org/confluence/display/IGNITE/Ignite+Durable+Memory+-+under+the+hood
But I'm not sure that you really need such details to understand the
principles of this mechanism.

Denis

пн, 25 мар. 2019 г. в 10:19, Justin Ji :

> According to the document, we know that there is only a subset of data in
> memory while the full data set in the disk when persistence enabled. I have
> some questions with regards to this point:
>
> Let's say I have a SQL like:
> select * from city where provinceId = '1';
> 1. If some data is in memory while other data is on disk, would Ignite
> query data from disk? After the query, would the data from disk be loaded
> into memory?
> 2. If all data is in memory, would Ignite query data from disk, or only
> from memory?
> 3. If all data is on disk, would Ignite query from memory?
>
> 4. Is there a document can explain the mechanism in detail?
>
> Looking forward to your reply!
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: InvalidClassException local class incompatible for custom cache store factory

2019-03-20 Thread Denis Mekhanikov
Ken,

The cache store factory is a part of the cache configuration. So, if a cache
configuration is stored somewhere and is deserialized using a new class,
then this exception is thrown.
It's possible if you have native persistence enabled, for example: the cache
configuration is read from disk when a node starts.

Cache configurations are sent to newly joined nodes along with discovery
data. If you want to change the cache store implementation, then you should
destroy the cache and start it with a new config; see the sketch below.
Having different implementations of the cache store on different nodes is
not a correct situation, so you should avoid it.
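A rough sketch of that procedure (the names are hypothetical; note that
destroying the cache drops its data):

ignite.destroyCache("myCache");

CacheConfiguration<Integer, Person> cfg = new CacheConfiguration<>("myCache");
cfg.setCacheStoreFactory(new MyNewCacheStoreFactory()); // the updated factory
cfg.setReadThrough(true);
cfg.setWriteThrough(true);

ignite.createCache(cfg);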

Denis

вт, 19 мар. 2019 г. в 20:30, relax ken :

> Hi,
>
> I am testing my custom CacheStoreFactory. It worked fine previously on my
> local dev machine. After I changed this class and run it, I got
> `java.io.InvalidClassException: xxx.CacheStoreFactory; local class
> incompatible: stream classdesc serialVersionUID = 7199421607011991053,
> local class serialVersionUID = -7841220943604923990`
>
> Is it the previous serialized class cached somewhere and incompatible with
> the new one? I only have this single node. There is no other nodes.
>
> Another question is I try to understand why CacheStoreFactory and
> CacheStore are serialized to newly joined node. Does it mean if I changed
> their implementation and release the changes to a new node, the new node
> will still perform as the old one?
>
> Very appreciate any help.
>
> Thanks,
>
> Ken
>


Re: cache.removeAll cache operation is slow

2019-01-22 Thread Denis Mekhanikov
Prasad,

When you run a transaction that involves many entries, the whole key set is
sent between nodes multiple times.
It also generates a lot of garbage, which cannot be released until the
transaction is committed.
It's better to remove values in small batches, so try changing the batch
size. It's possible that singular removes will work faster.
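A sketch of batched removal (the batch size is an assumption to tune;
assumes cache is your IgniteCache and keysToRemove is the key collection;
sorted batches keep the lock acquisition order deterministic for
transactional caches):

List<Long> keys = new ArrayList<>(keysToRemove);
Collections.sort(keys);

int batchSize = 10_000;

for (int i = 0; i < keys.size(); i += batchSize) {
    // A TreeSet keeps each batch sorted, so locks are taken in the same order.
    Set<Long> batch = new TreeSet<>(
        keys.subList(i, Math.min(i + batchSize, keys.size())));

    cache.removeAll(batch);
}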

Denis

вт, 22 янв. 2019 г. в 16:37, Prasad Bhalerao :

> Hi,
>
> I am removing around 2 million entries from cache using cache.removeAll
> api.
> I removing entries in a batch of 100k.  To remove 2 million entries ignite
> takes around 218 seconds.
> This cache is transactiona and write synchronization mode is full sync.
>
> Is there any way to improve the performance of removeAll operation?
>
> Thanks,
> Prasad
>
>
>


Re: ignite zk: Possible starvation in striped pool

2019-01-22 Thread Denis Mekhanikov
This message is printed when a thread in the striped pool doesn't make any
progress for some time.
As far as I can see from the thread dump, a TCP connection to another node
cannot be established for some reason.
It's probably caused by network problems or long GC pauses on one of the
nodes.
This is about the communication SPI and doesn't have anything to do with
ZooKeeper.

Denis

вт, 22 янв. 2019 г. в 15:38, wangsan :

> 10:38:31.577 [grid-timeout-worker-#55%DAEMON-NODE-10-153-106-16-8991%]
> WARN
> o.a.ignite.internal.util.typedef.G  - >>> Possible starvation in striped
> pool.
> Thread name: sys-stripe-9-#10%DAEMON-NODE-10-153-106-16-8991%
> Queue: []
> Deadlock: false
> Completed: 17156
> Thread [name="sys-stripe-9-#10%DAEMON-NODE-10-153-106-16-8991%", id=38,
> state=RUNNABLE, blockCnt=0, waitCnt=17103]
>
> at sun.nio.ch.Net.poll(Native Method)
>
>
> at sun.nio.ch.SocketChannelImpl.poll(SocketChannelImpl.java:954)
>
>
> at sun.nio.ch.SocketAdaptor.connect(SocketAdaptor.java:110)
>
>
> at
>
> o.a.i.spi.communication.tcp.TcpCommunicationSpi.createTcpClient(TcpCommunicationSpi.java:3262)
> at
> o.a.i.spi.communication.tcp.TcpCommunicationSpi.createNioClient(TcpCommunicationSpi.java:2958)
>
> at
> o.a.i.spi.communication.tcp.TcpCommunicationSpi.reserveClient(TcpCommunicationSpi.java:2841)
>
> at
> o.a.i.spi.communication.tcp.TcpCommunicationSpi.sendMessage0(TcpCommunicationSpi.java:2692)
>
> at
> o.a.i.spi.communication.tcp.TcpCommunicationSpi.sendMessage(TcpCommunicationSpi.java:2651)
>
> at
> o.a.i.i.managers.communication.GridIoManager.send(GridIoManager.java:1643)
>
>
> at
> o.a.i.i.managers.communication.GridIoManager.sendToGridTopic(GridIoManager.java:1715)
>
>
> at
> o.a.i.i.processors.cache.GridCacheIoManager.send(GridCacheIoManager.java:1160)
>
>
> at
> o.a.i.i.processors.cache.GridCacheIoManager.send(GridCacheIoManager.java:1199)
>
>
> at
> o.a.i.i.processors.cache.distributed.dht.atomic.GridDhtAtomicAbstractUpdateFuture.sendDhtRequests(GridDhtAtomicAbstractUpdateFuture.java:466)
>
> at
> o.a.i.i.processors.cache.distributed.dht.atomic.GridDhtAtomicAbstractUpdateFuture.map(GridDhtAtomicAbstractUpdateFuture.java:423)
>
> at
> o.a.i.i.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateAllAsyncInternal0(GridDhtAtomicCache.java:1805)
>
> at
> o.a.i.i.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateAllAsyncInternal(GridDhtAtomicCache.java:1628)
>
> at
>
> o.a.i.i.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.processNearAtomicUpdateRequest(GridDhtAtomicCache.java:3056)
> at
> o.a.i.i.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.access$400(GridDhtAtomicCache.java:130)
>
> at
>
> o.a.i.i.processors.cache.distributed.dht.atomic.GridDhtAtomicCache$5.apply(GridDhtAtomicCache.java:266)
> at
> o.a.i.i.processors.cache.distributed.dht.atomic.GridDhtAtomicCache$5.apply(GridDhtAtomicCache.java:261)
>
> at
> o.a.i.i.processors.cache.GridCacheIoManager.processMessage(GridCacheIoManager.java:1054)
>
> at
> o.a.i.i.processors.cache.GridCacheIoManager.onMessage0(GridCacheIoManager.java:579)
>
>
> at
> o.a.i.i.processors.cache.GridCacheIoManager.handleMessage(GridCacheIoManager.java:378)
>
>
> at
> o.a.i.i.processors.cache.GridCacheIoManager.handleMessage(GridCacheIoManager.java:304)
>
>
> at
> o.a.i.i.processors.cache.GridCacheIoManager.access$100(GridCacheIoManager.java:99)
>
>
> at
> o.a.i.i.processors.cache.GridCacheIoManager$1.onMessage(GridCacheIoManager.java:293)
>
>
> at
> o.a.i.i.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1556)
>
>
> at
> o.a.i.i.managers.communication.GridIoManager.processRegularMessage0(GridIoManager.java:1184)
>
> at
> o.a.i.i.managers.communication.GridIoManager.access$4200(GridIoManager.java:125)
>
>
> at
> o.a.i.i.managers.communication.GridIoManager$9.run(GridIoManager.java:1091)
>
>
> at
> o.a.i.i.util.StripedExecutor$Stripe.run(StripedExecutor.java:511)
>
>
> at java.lang.Thread.run(Thread.java:748)
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Is there Ignite Eviction low water mark

2019-01-22 Thread Denis Mekhanikov
In 2.x, page eviction removes entries from a cache on all nodes,
and EVT_CACHE_ENTRY_EVICTED events are not triggered when it happens.
Please file a JIRA issue if you rely on this event:
https://issues.apache.org/jira/

> The implementation of eviction from 1.x to 2.x is a different, new one?
Yes, data storage and eviction mechanisms are completely different between
1.x and 2.x.

Denis

пн, 5 нояб. 2018 г. в 21:46, HEWA WIDANA GAMAGE, SUBASH <
subash.hewawidanagam...@fmr.com>:

> In Ignite 1.x vs 2.x, when we get notified about an eviction (from cache
> events), will Ignite literally delete the entry, or can it remain
> for a certain period and later get wiped out?
>
>
>
> The implementation of eviction from 1.x to 2.x is a different, new one?
>
>
>
>
>
> *From:* Denis Mekhanikov [mailto:dmekhani...@gmail.com]
> *Sent:* Monday, November 05, 2018 7:22 AM
> *To:* user
> *Subject:* Re: Is there Ignite Eviction low water mark
>
>
> This email is from an external source - exercise caution regarding links
> and attachments.
>
> Hi!
>
>
>
> Ignite 2.x has a mechanism called page eviction
> <https://apacheignite.readme.io/v2.6/docs/evictions#section-off-heap-memory>.
> It's configured using DataRegion#pageEvictionMode
> <https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/configuration/DataRegionConfiguration.html#setPageEvictionMode-org.apache.ignite.configuration.DataPageEvictionMode->
> .
>
> Page eviction removes entries from a data region until either
> DataRegionConfiguration#evictionThreshold
> <https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/configuration/DataRegionConfiguration.html#setEvictionThreshold-double->
>  is
> reached,
>
> or DataRegionConfiguration#emptyPagesPoolSize
> <https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/configuration/DataRegionConfiguration.html#setEmptyPagesPoolSize-int->
>  pages
> are available in the free list.
>
> It's applied only when persistence is disabled. Otherwise data is just
> spilled to disk.
>
>
>
> Ignite 1.x has a different kind of eviction, since it doesn't have page
> memory nor data regions.
>
> It removes data until occupied memory is below LruEvictionPolicy#maxSize
> <https://ignite.apache.org/releases/1.9.0/javadoc/org/apache/ignite/cache/eviction/lru/LruEvictionPolicy.html#setMaxSize(int)>
> .
>
> This is similar to on-heap eviction policy
> <https://apacheignite.readme.io/v2.6/docs/evictions#section-on-heap-cache>
> in Ignite 2.x, but you don't need to use it
>
> unless you know exactly what you're doing and what an on-heap cache is.
>
>
>
> Denis
>
>
>
> пт, 2 нояб. 2018 г. в 21:35, HEWA WIDANA GAMAGE, SUBASH <
> subash.hewawidanagam...@fmr.com>:
>
> Hi all,
>
> This is to understand how eviction works in Ignite cache.
>
>
>
> For example, let’s say the LRU eviction policy is set to max 100MB. Then,
> when the cache size reached 100MB, how much of LRU entries will get evicted
> ? Is there any low water mark/percentage ? Like eviction policy will remove
> 20% of the cache, and then let it again reach back to 100MB to clean up
> again.
>
>
>
> Also please confirm whether the behavior is same in Ignite 1.9 vs 2.6.
>
>


Re: Ignite Data streamer optimization

2018-12-28 Thread Denis Mekhanikov
To achieve the best data streaming performance, you should aim at the
highest utilization of resources on the data nodes.
There is no silver bullet for data streamer tuning, unfortunately.
Try changing the parameters and see how they affect the utilization and the
overall performance.

For me, the default data streamer parameters usually work fine.
Your value of perNodeBufferSize looks too big: it's measured in records, not
bytes. By default it's 512.
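For reference, a sketch of a typical streamer setup (the cache name and the
Record type are hypothetical; assumes ignite is a started node):

try (IgniteDataStreamer<Long, Record> streamer = ignite.dataStreamer("myCache")) {
    streamer.allowOverwrite(false);
    streamer.perNodeBufferSize(512);      // entries, not bytes
    streamer.perNodeParallelOperations(8);
    streamer.autoFlushFrequency(1_000);   // flush at least once a second (ms)

    for (Record r : records)
        streamer.addData(r.getId(), r);
}   // close() flushes the remaining buffered data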

Denis

пт, 28 дек. 2018 г. в 14:32, ashishb888 :

> I am using below settings:
> allowOverwrite: false
> nodeParallelOperations: 1
> autoFlushFrequency: 10
> perNodeBufferSize: 500
>
>
> My records size is around 2000 bytes. And see the
> "grid-data-loader-flusher"
> thread stats as below:
>
> Thread                         Count  Average        Longest     Duration
> grid-data-loader-flusher-#100  38     4,737,793.579  30,427,862  180,036,156
>
> What would be the best configurations for Data streamer?
>
> Thanks
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Multithreading in Compute Task

2018-12-28 Thread Denis Mekhanikov
Both approaches are valid, so you may use whichever works better for you.
Just make sure not to wait for the results of tasks that may end up in the
same thread pool as the waiting task; otherwise thread pool starvation is
possible.

Custom thread pools are designed for executing tasks from other tasks:
https://apacheignite.readme.io/docs/thread-pools#section-custom-thread-pools
But if you only need to perform an asynchronous operation, then a simple
Java thread pool could work.
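A sketch of the custom-pool approach (the pool name and size are
assumptions):

IgniteConfiguration cfg = new IgniteConfiguration()
    .setExecutorConfiguration(new ExecutorConfiguration("subPool").setSize(16));

Ignite ignite = Ignition.start(cfg);

// From inside the outer compute task:
ignite.compute().withExecutor("subPool").run(() -> {
    // nested work; runs in "subPool", so it cannot starve the public pool
});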

Denis


сб, 22 дек. 2018 г. в 20:43, kellan :

> Hi,
> I have a compute task that needs to query the local data set several
> hundred
> times per task. Creating another task in a custom thread pool seems to be
> the preferred method of multithreading within an already running compute
> task, but I find the overhead a bit slow. I've tried running Scala futures
> within the task, which gives me the desired results, but I've noticed some
> unexpected behavior, like the task freezing (possibly thread pool
> starvation, I'm not sure). Is there a safe, efficient way to run futures
> within a task that I know are only going to operate only on the local data
> set?
> Thanks
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Is Apache planning to release support for table joins with updates .

2018-12-28 Thread Denis Mekhanikov
Try substituting personalDetails.F_id with a value from the record that you
expect to be updated, and run the subquery on its own.
Make sure that the result is not empty.

Denis

пт, 28 дек. 2018 г. в 13:14, DS :

> Denis Mekhanikov wrote
> > Is it a single query? Why not execute these two updates separately?
> > I don't see any problems in the first one.
> >
> > The second one needs some refactoring though.
> > Ignite uses an SQL query engine of H2, which doesn't support JOINs in
> > UPDATE statements.
> > But you may change it in the following way:
> >
> > UPDATE table1 SET table1.quantity = 6 WHERE EXISTS
> >
> > (SELECT * FROM table2 WHERE table2.pid=table1.pid AND table2.zip
> > ='abc')
> > ;
> >
> >
> > Denis
> >
> > чт, 27 дек. 2018 г. в 12:49, DS <deepika.singh@...>:
> >
> >> I am  looking solution  for queries  like below  :
> >>
> >> Update Person SET  lastName ='pit' where person.id =city.id AND
> >> city.zipCode
> >> ='B8Q97'
> >>
> >> OR
> >>
> >> UPDATE table1, table2 SET table1.quantity = 6 where table1.pid
> >> =table2.pid
> >> AND  table2.zip ="abc"
> >>
> >>
> >>
> >>
> >>
> >> --
> >> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
> >>
>
>  The query I ran over GridGain:
>
> UPDATE personalDetails SET personalDetails.age = 26
> WHERE EXISTS (SELECT * FROM cityDEtails WHERE
>  cityDEtails.id=personalDetails.F_id AND cityDEtails.pincode = 560102)
>
> It ran but didn't update the value.
>
> Any idea why?
>
>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Can Ignite code throw any RuntimeExceptions ?

2018-12-28 Thread Denis Mekhanikov
Usually, Ignite methods throw exceptions with a general description and a
cause exception wrapped into them.

I can't remember any place where a non-Ignite exception is thrown from an
Ignite method. Let us know if you find one.

Denis

чт, 27 дек. 2018 г. в 09:59, userx :

> Hi
>
> Let's say we have a java program which starts Ignite in client mode, and
> there is a RuntimeException which happens in the Ignite code (on the
> client side only). Would it be wrapped as an IgniteException and passed to
> the Java client, or would it be thrown as a RuntimeException to the java
> program under any circumstances?
>
> I have seen most of the methods declaring throws IgniteException but just
> thinking about any scenario where in a RuntimeException occurs and get
> passed to the client.
>
> Regards
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Can we put a timeout on Ignite.close ?

2018-12-28 Thread Denis Mekhanikov
Ignite.close() effectively does exactly what you described.
It tries to stop all threads as fast as possible by interrupting them.
But some threads may be performing blocking operations at the moment, so
they are not killed instantaneously.
For example, if you try to stop a node in the middle of a checkpoint, then
the close() method may hang until a disk write operation finishes.

You can take a thread dump and analyze it to figure out what's blocking the
close. Logs may also contain useful information on this matter.

If you need to stop Ignite asynchronously, you can start a new thread
and call Ignite#close() from it.
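A minimal sketch of that approach with a 5-second bound (usual
java.util.concurrent imports assumed):

ExecutorService exec = Executors.newSingleThreadExecutor();
Future<?> closeFut = exec.submit(ignite::close);

try {
    closeFut.get(5, TimeUnit.SECONDS);
}
catch (TimeoutException e) {
    // The node is still stopping; take a thread dump to see what blocks it.
}
catch (InterruptedException | ExecutionException e) {
    // Close failed or the wait was interrupted.
}

exec.shutdown();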

Denis

ср, 26 дек. 2018 г. в 16:55, userx :

> Hi
>
> Is there a way we can put a timeout on the close operation of Ignite? For
> example, I have a java program which does some computation and then starts
> Ignite in client mode so that the computed numbers can be written to a
> cache on Ignite servers in persistent mode. If there is an execution
> exception, I would want the java program to call close on the Ignite
> client and return to its normal processing within a specified time, say 5
> seconds or so.
> Which means that if the ignite client is not closed within 5 seconds, all
> the ignite threads started on the client side should be interrupted and
> should be closed/killed/stopped.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Is Apache planning to release support for table joins with updates .

2018-12-28 Thread Denis Mekhanikov
Is it a single query? Why not execute these two updates separately?
I don't see any problems in the first one.

The second one needs some refactoring though.
Ignite uses an SQL query engine of H2, which doesn't support JOINs in
UPDATE statements.
But you may change it in the following way:

UPDATE table1 SET table1.quantity = 6 WHERE EXISTS

(SELECT * FROM table2 WHERE table2.pid=table1.pid AND table2.zip ='abc')
;


Denis

чт, 27 дек. 2018 г. в 12:49, DS :

> I am  looking solution  for queries  like below  :
>
> Update Person SET  lastName ='pit' where person.id =city.id AND
> city.zipCode
> ='B8Q97'
>
> OR
>
> UPDATE table1, table2 SET table1.quantity = 6 where table1.pid =table2.pid
> AND  table2.zip ="abc"
>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Index inline size

2018-12-27 Thread Denis Mekhanikov
Prasad,

By default, *QuerySqlField#inlineSize* is equal to -1, which means that it
will be chosen automatically.
*CacheConfiguration#setSqlIndexMaxInlineSize* specifies the maximal
automatically calculated inline size for a cache.
But if *QuerySqlField#inlineSize* is not -1, then it will be used regardless
of the configured maximum.
It should only be less than 2048; otherwise 2048 will be used.

Judging by the warning message, the value 10 is used for the inline size.
Did you specify it manually, or was it calculated automatically?

Index inlining documentation:
https://apacheignite-sql.readme.io/docs/create-index#section-index-inlining
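For example, a sketch of the annotation-based variant (the class and field
names are hypothetical; 83 is the recommended value from your warning
message):

public class UserAccount implements Serializable {
    @QuerySqlField(index = true, inlineSize = 83)
    private long subscriptionId;
}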

Denis

ср, 26 дек. 2018 г. в 18:27, Prasad Bhalerao :

> Hi,
>
> I have set sqlIndexMaxInline size in cache configuration level as follows.
>
> cacheCfg.setSqlIndexMaxInlineSize(100);
>
> But still  I am getting following warning message in log.
>
> WARN  o.a.i.i.p.q.h2.database.H2TreeIndex -  Indexed
> columns of a row cannot be fully inlined into index what may lead to
> slowdown due to additional data page reads, increase index inline size if
> needed (use INLINE_SIZE option for CREATE INDEX command,
> QuerySqlField.inlineSize for annotated classes, or QueryIndex.inlineSize
> for explicit QueryEntity configuration) [cacheName=USERACCOUNTDATA,
> tableName=USER_ACCOUNT_CACHE, idxName=USER_ACCOUNT_IDX4,
> idxCols=(SUBSCRIPTIONID, UNITID, _KEY, AFFINITYID), idxType=SECONDARY,
> curSize=10, recommendedInlineSize=83]
>
> 1) Is it necessary to set inline size using @QuerySqlField annotation?
>
> 2) How do I set inline size in case of group index in following case?
> Do I need to set inline property inside each @QuerySqlField annotation?
>
> public class Person implements Serializable {
>   /** Indexed in a group index with "salary". */
>   @QuerySqlField(orderedGroups={@QuerySqlField.Group(
> name = "age_salary_idx", order = 0, descending = true)})
>   private int age;
>
>   /** Indexed separately and in a group index with "age". */
>   @QuerySqlField(index = true, orderedGroups={@QuerySqlField.Group(
> name = "age_salary_idx", order = 3)})
>   private double salary;
> }
>
>
>
>
>
>
>
> Thanks,
> Prasad
>


Re: Java Thin Client support to deploy services on a cluster

2018-12-20 Thread Denis Mekhanikov
Zaheer,

Currently the thin client doesn't support service-related operations.
A regular client or server node should be used for service deployment; see
the sketch below.
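A minimal sketch of deploying from a regular (thick) client node (the
service name and implementation are hypothetical):

Ignition.setClientMode(true);

try (Ignite ignite = Ignition.start()) {
    ignite.services().deployClusterSingleton("mySvc", new MyServiceImpl());
}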

Denis

чт, 20 дек. 2018 г. в 11:03, Zaheer :

> Hi,
>
> I saw from documentation about Thin clients used to create, destroy caches.
> I wanted to know if there is any way to deploy services on my cluster
> through thin client. (Service implementation already placed in libs).
>
> I went through Javadoc of Ignite Client and found no method related to
> services.
>
> If it was a thick client we could have used ignite.services.deploy() right
> ?
> So I need to know if thin client supports such deployment of services.
>
> Thanks in advance.
>
> Regards
> Zaheer.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: How to Write and Read a Nested JSON Into and From Apache Ignite Table

2018-12-17 Thread Denis Mekhanikov
Hareesh,

Are the objects supposed to have the same schema?
If yes, then the JSONs may be converted to BinaryObjects.
If not, then you'll have to store them as blobs or text.

If you want to access such data from SQL, your tables
should have a flat structure. You may have nested binary
objects, but in SQL they will look flat anyway.
Take a look at the following documentation page:
https://apacheignite-sql.readme.io/docs/schema-and-indexes#section-indexing-nested-objects
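A sketch of a JSON-like nested structure built with BinaryObjectBuilder (the
type and field names are hypothetical; assumes ignite is a started node):

BinaryObject address = ignite.binary().builder("Address")
    .setField("city", "Bangalore")
    .setField("zip", "560102")
    .build();

BinaryObject person = ignite.binary().builder("Person")
    .setField("name", "John")
    .setField("address", address) // nested object; indexable via an "address.city" field
    .build();

ignite.cache("people").withKeepBinary().put(1, person);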

Denis

пн, 17 дек. 2018 г. в 10:00, HareeshShasrtyMJ :

> Hello All,
>
> For one of our big data platform, we are evaluating to have Apache Ignite
> to
> be the data store that is capable of providing,
> 1. in-memory processing capabilities for performance,
> 2. ANSI SQL-99 interface
> 3. Standing up a Dimensional Data Model (Star Schema OR Snowflake) on the
> Apache Database,
> 4. Provide Interface to DB Visualization Tools viz., Tableau and Power BI,
> and
> 5. Ability to Write and Read a Nested JSON Into and From Apache Ignite
> Table
>
> We have successfully evaluated the first four aforementioned features
> required for our platform. However, we are currently stuck on the 5th
> requirement, as we are unable to insert nested JSON into an Apache Ignite
> table using a Scala DataFrame... we are able to load simple JSON though.
>
> We also followed the Ignite Documentation and articles on blogs that
> suggested use of Binary Objects.
> However, on evaluation, we found that even Binary Objects do not support
> nested levels.
>
> https://issues.apache.org/jira/browse/IGNITE-6265
> https://issues.apache.org/jira/browse/IGNITE-6266
>
> https://stackoverflow.com/questions/45674684/is-apache-ignite-suited-for-nosql-schema
>
> Hence, any support or direction or workarounds with corresponding examples
> on how to handle Nested JSON with Apache Ignite would be of great help and
> Appreciated.
>
> Thank you all in advance.
>
>
>
> -
>
> -
> Thanks & Regards,
> Hareesh
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Expose service deployed on service node in ignite via REST

2018-12-13 Thread Denis Mekhanikov
Zaheer,

There is an embedded REST processor in Ignite.
It doesn't have a direct way to call service methods.
But you can execute compute tasks via REST, and you may use them to call
services; see the sketch below.

Another option is to write a custom REST controller and write your custom
logic in it.
That's what you were recommended on the forum.
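A sketch of the compute-task route (the task, service name and interface are
hypothetical); once such a task is on the nodes' classpath, it can be
triggered with the REST 'exe' command:

import java.util.Collection;
import java.util.Collections;
import java.util.List;

import org.apache.ignite.Ignite;
import org.apache.ignite.compute.*;
import org.apache.ignite.resources.IgniteInstanceResource;

public class CallServiceTask extends ComputeTaskSplitAdapter<String, String> {
    @Override protected Collection<? extends ComputeJob> split(int gridSize, final String arg) {
        return Collections.singleton(new ComputeJobAdapter() {
            @IgniteInstanceResource
            private Ignite ignite;

            @Override public Object execute() {
                // Look up the deployed service and delegate to it.
                MyService svc = ignite.services()
                    .serviceProxy("mySvc", MyService.class, false);

                return svc.process(arg);
            }
        });
    }

    @Override public String reduce(List<ComputeJobResult> results) {
        return results.get(0).getData();
    }
}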

Denis

чт, 13 дек. 2018 г. в 11:54, Zaheer :

> Hi Denis,
>
> Thanks for the reply. I will look into that section about executing the
> compute task from REST.
>
> But I require to call the service via REST. Some forum posts suggested
> using
> jetty inside *init()* method of service implementation. I would like to
> know
> more about exposing the service via REST.
>
> Zaheer
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Expose service deployed on service node in ignite via REST

2018-12-13 Thread Denis Mekhanikov
Zaheer,

Does it have to be a service? Won't a compute task be enough?
You can use the execute ('exe') REST command to run a compute task on the
grid.
You may either modify the cache or call a service from this task.

Denis

чт, 13 дек. 2018 г. в 09:45, Zaheer :

> Hi,
>
> In Ignite cluster, I have a cache data node and a service node with a
> deployed service that performs some actions on the cache. My aim is to use
> the methods of this deployed service through REST. How to expose my
> deployed
> service's methods to outside world like browsers, or other REST clients ?
>
> I had a look at the REST api of Ignite and I only found methods to access
> the cache and modify it. But I do not want to use those methods to modify
> my
> cache. Instead I want to use my service's methods to modify my cache.
> kindly
> reply.
>
> Thanks in advance.
>
> Regards,
> Zaheer.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: SQL Index Payload size?

2018-12-11 Thread Denis Mekhanikov
Jose,

There is no such metric currently.
The only way to understand it now is to try loading data with and without
indices,
and compare results.

There is an enhancement proposal, that covers this functionality:
https://cwiki.apache.org/confluence/display/IGNITE/IEP-29%3A+SQL+management+and+monitoring
JIRA ticket: https://issues.apache.org/jira/browse/IGNITE-6477

Denis

сб, 8 дек. 2018 г. в 08:38, joseheitor :

> How do I find the payload size of an index?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Spring ThreadPoolTaskScheduler default behaviour changed

2018-12-11 Thread Denis Mekhanikov
Andrey,

Thanks for the info!

Denis

вт, 11 дек. 2018 г. в 14:02, Andrey Davydov :

> Hello,
>
> When I updated Ignite from 2.6 to 2.7, I had to update Spring to the
> corresponding version too (from 4.16 to 4.18). And I got some exceptions
> on application stop (org.apache.ignite.internal.
> IgniteInterruptedCheckedException on backplane scheduled queries).
>
> As I found with the debugger, in the current version of Spring the
> scheduler does not wait for job completion by default. So if I set the
> property to wait for completion, everything becomes ok.
>
> <bean
> class="org.springframework.scheduling.concurrent.ThreadPoolTaskScheduler">
>   <property name="waitForTasksToCompleteOnShutdown" value="true"/>
> </bean>
>
>


Re: Cluster High-Availability

2018-12-11 Thread Denis Mekhanikov
Johe,

You can configure a corresponding backup filter for your cache affinity
function.
Use the following method to specify it in a cache config:
RendezvousAffinityFunction#setAffinityBackupFilter

Refer to the following thread for more details:
http://apache-ignite-users.70518.x6.nabble.com/How-to-use-BackupFilter-to-assign-all-backups-to-different-machine-groups-td6513.html
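A sketch of such a filter (the "CELL" node attribute is an assumption; the
filter keeps every backup out of the primary's cell):

RendezvousAffinityFunction aff = new RendezvousAffinityFunction();

// 'candidate' is the node considered for a backup; 'assigned' already
// contains the primary (and previously chosen backups) for the partition.
aff.setAffinityBackupFilter((candidate, assigned) ->
    !Objects.equals(candidate.attribute("CELL"), assigned.get(0).attribute("CELL")));

cacheCfg.setAffinity(aff);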

Denis

вт, 11 дек. 2018 г. в 13:56, summasumma :

> Following the thread for same question!
>
> Note: I had a similar question in the following thread for some inputs:
>
> http://apache-ignite-users.70518.x6.nabble.com/Geo-redundancy-support-in-Ignite-tt25477.html
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: control.sh --baseline do not work after update to 2.7

2018-12-11 Thread Denis Mekhanikov
Yuriy,

Do you see any exceptions in the logs of the control.sh script or the
cluster nodes when you run the baseline command?

Try specifying the --host and --port parameters explicitly.

Denis

вт, 11 дек. 2018 г. в 12:56, Yuriy :

> Hello.
>
> After updating from 2.6 to 2.7 control.sh --baseline can not connect to the
> cluster.
>
> [root@ignat3 apache-ignite-2.7.0-bin]# sh bin/control.sh --baseline
> Control utility [ver. 2.7.0#20181130-sha1:256ae401]
> 2018 Copyright(C) Apache Software Foundation
> User: root
>
> 
> Connection to cluster failed.
> Error: Failed to communicate with grid nodes (maximum count of retries
> reached).
>
> But --state worked:
> [root@ignat3 apache-ignite-2.7.0-bin]# sh bin/control.sh --state
> Control utility [ver. 2.7.0#20181130-sha1:256ae401]
> 2018 Copyright(C) Apache Software Foundation
> User: root
>
> 
> Cluster is active
>
> What could be the cause?
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Long activation times with Ignite persistence enabled

2018-11-05 Thread Denis Mekhanikov
Naveen,

40 caches is quite a lot. It means that Ignite needs to handle 40 *
(number of partitions) files.
By default, each cache has 1024 partitions.
This is quite a lot, and the disk is the bottleneck here. Changing thread
pool sizes won't save you.
If you divide your caches into cache groups, then they will share the same
partitions, so the number of files will be reduced.
You can also try reducing the number of partitions, but it may lead to
uneven distribution of data between nodes.
Either of these changes will require reloading the data.

You can record *dstat* output on the host machine to make sure that the disk
is the weak spot.
If its utilization is high while the CPU is idle, then it means that you
need a faster disk.
Denis


пн, 5 нояб. 2018 г. в 17:10, Naveen :

> Hi Denis
>
> We have only 40 caches in our cluster.
> If we introduce grouping of caches, guess we need to reload the data from
> scratch, right ??
>
> We do have very powerful machines as part of cluster, they are 128 CPU very
> high end boxes and huge resources available, by increasing any of the below
> thread pools, can we reduce the cluster activation time.
>
> System Pool
> Public Pool
> Striped Pool
> Custom Thread Pools
>
> Thanks
> Naveen
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Is there Ignite Eviction low water mark

2018-11-05 Thread Denis Mekhanikov
Hi!

Ignite 2.x has a mechanism called page eviction
.
It's configured using DataRegion#pageEvictionMode

.
Page eviction removes entries from a data region until either
DataRegionConfiguration#evictionThreshold

is
reached,
or DataRegionConfiguration#emptyPagesPoolSize

pages
are available in the free list.
It's applied only when persistence is disabled. Otherwise data is just
spilled to disk.

Ignite 1.x has a different kind of eviction, since it doesn't have page
memory nor data regions.
It removes data until occupied memory is below LruEvictionPolicy#maxSize

.
This is similar to on-heap eviction policy

in Ignite 2.x, but you don't need to use it
unless you know exactly what you're doing and what an on-heap cache is.

Denis

пт, 2 нояб. 2018 г. в 21:35, HEWA WIDANA GAMAGE, SUBASH <
subash.hewawidanagam...@fmr.com>:

> Hi all,
>
> This is to understand how eviction works in Ignite cache.
>
>
>
> For example, let’s say the LRU eviction policy is set to max 100MB. Then,
> when the cache size reached 100MB, how much of LRU entries will get evicted
> ? Is there any low water mark/percentage ? Like eviction policy will remove
> 20% of the cache, and then let it again reach back to 100MB to clean up
> again.
>
>
>
> Also please confirm whether the behavior is same in Ignite 1.9 vs 2.6.
>


Re: Apache Ignite - Near Cache consistency

2018-11-05 Thread Denis Mekhanikov
Correct.
Strong consistency is guaranteed for atomic caches as well, including near
cache entries.

Denis

пн, 5 нояб. 2018 г. в 11:21, ales :

> Thanks Denis for the answer.
>
> Actually i am using "ATOMIC" atomicity mode (ie no transaction).
> I have been told that it may be linked to the backup synchronicity mode
> (FULL_SYNC would ensure consistency between nodes and eliminate stale
> data).
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Apache Ignite - Near Cache consistency

2018-11-02 Thread Denis Mekhanikov
Arnaud,

There is a short note on the following page about near cache consistency:
https://apacheignite.readme.io/docs/near-caches
Here is its text:
> Near caches are fully transactional and get updated or invalidated
automatically whenever the data changes on the servers.

Near cache entries take part in transactions, just like regular backups, so
ACID consistency is preserved.

Denis

пт, 2 нояб. 2018 г. в 16:17, Arnaud Lescaroux :

> Hello,
>
> I am evaluating Apache Ignite to check if it fits our company's need. So
> far so good. Now i am trying to understand how the near cache feature works
> in terms of consistency.
>
> We currently have several micro-services with one Ignite configured in
> client mode in each. All these instances are connected to several Ignite
> servers in a cluster. For some use cases (reads>>>writes) it seems
> reasonable to use a near cache in front of the cache servers. I have
> checked and it seems to automatically invalidate "stale data" on all
> instances in case of write, which is good.
>
> My question: is there any documentation beside the one on the official
> site that explains how it works? In particular i would like to understand
> if any subsequent read requests (after the write one) to any other
> instances will get the updated data (no eventual consistency).
>
> Thanks!
>
>


Re: Distributed Priority QUEUE

2018-11-02 Thread Denis Mekhanikov
You can take an implementation of a regular heap data structure
and replace the array with an Ignite cache.
There are a few points to take into account though:

   - The cache should be transactional, and each operation should happen
   inside a transaction. Otherwise concurrent operations will break the heap
   structure.
   - You should make sure that all keys are accessed in a certain order,
   e.g. by increasing key value. This is important to avoid deadlocks:
   calculate the keys that you are going to need during the transaction,
   and then access their values in sorted order. This way deadlocks are
   eliminated.

Each modification operation will require O(log N) cache operations, where N
is the number of values stored in the cache.
Values may be stored in batches to minimize the number of entries accessed
in a single operation.
Don't make the batches too big though: if you do, it will increase
contention, and more unneeded information will be transferred over the
network during each transaction.

Another option is to have a regular priority queue, replicated and stored as
a single value.
This priority queue would contain a set of keys that may be used to access a
cache to get the actual values.
This approach may be used only if you have a small set of keys; otherwise
the priority queue will grow huge, and storing it as a single value will be
unreasonable.
It will also create a contention point between concurrent operations.

And the last option that I see would be to store the values in a cache and
make each node keep its own local priority queue of keys.
The priority queue may be updated on continuous query events triggered in
the cache; see the sketch below.
This is applicable only for key sets that fit into the memory of each node.
And queue updates may be delayed, so you may get inconsistent results.
Denis

пт, 2 нояб. 2018 г. в 13:00, Luckyman :

> how we can implement distributed priority QUEUE with apache ignite?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: IGNITE_EXCHANGE_HISTORY_SIZE value for Ignite client

2018-11-01 Thread Denis Mekhanikov
Cristian,

I don't see any critical consequences if you set
IGNITE_EXCHANGE_HISTORY_SIZE to 0 on clients.
Exchange futures will just be removed after each exchange; by default they
are removed only once in a while.
Don't do it on server nodes though: the coordinator needs the exchange
history.
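For reference, a sketch of setting it programmatically on the client before
start (assuming the IgniteSystemProperties constant; it can equally be
passed as a JVM option):

// Equivalent to passing -DIGNITE_EXCHANGE_HISTORY_SIZE=0 to the client JVM.
System.setProperty(IgniteSystemProperties.IGNITE_EXCHANGE_HISTORY_SIZE, "0");

Ignition.setClientMode(true);
Ignite client = Ignition.start();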

Your problem sounds similar to the following issues: IGNITE-7918
, IGNITE-7319
.
Try updating to one of the nightly builds:
https://ignite.apache.org/download.cgi#nightly-builds
Or you can wait for Ignite 2.7. Its release date is right around the corner.

Denis

ср, 31 окт. 2018 г. в 21:49, Cristian Bordei :

> Hello,
>
> We are using Ignite 2.6.0 and we notice on our java app configured as
> Ignite client that the memory is increasing too much (approx 5GB). We
> succeeded to fix this by setting the value of
> IGNITE_EXCHANGE_HISTORY_SIZE to a lower size than 1000.
>
> My question is which will be the impact if we set the value
> of IGNITE_EXCHANGE_HISTORY_SIZE to 0 on the java clients side.
>
> Our configuration is as follow:
> - two Ignite servers in cluster mode. 128 GB ram and 8 core cpu
> - 20 java apps configured as Ignite clients.
> IgniteConfiguration igniteConfiguration = new IgniteConfiguration();
> igniteConfiguration.setClientMode(true);
>
> Our use case suppose to destroy all 400 caches every day and recreate
> another ones back again. The reuse of the cache is not possible.
>
> Thank you,
> Cristi
>
> Sent from Yahoo Mail on Android
> 
>


Re: What is the username of ignite web console

2018-11-01 Thread Denis Mekhanikov
Justin,

The first registered user becomes an admin.
And he/she may grant admin rights to other users after they register.

Denis

чт, 1 нояб. 2018 г. в 15:03, wt :

> click register and enter your details and it will let you in
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: An error occured when recreate cache

2018-11-01 Thread Denis Mekhanikov
Could you describe steps to reproduce the issue in more details?
What is the cluster topology and sequence of operations?

Denis

чт, 25 окт. 2018 г. в 6:50, Justin Ji :

> The following is the stack:
>
> 2018-10-25 03:47:02:992 [exchange-worker-#42] ERROR
> o.a.i.i.p.c.d.d.p.GridDhtPartitionsExchangeFuture:498 - Failed to
> reinitialize local partitions (preloading will be stopped):
> GridDhtPartitionExchangeId [topVer=AffinityTopologyVersion [topVer=1,
> minorTopVer=1], discoEvt=DiscoveryCustomEvent
> [customMsg=ChangeGlobalStateMessage
> [id=5491559a661-f481a0dd-41dd-400f-9f16-78f1770bd576,
> reqId=c18dfcb8-3874-48df-b88a-280270f3b2db,
> initiatingNodeId=688cb8a6-b5b0-45e6-86d3-ad80d8dff395, activate=true,
> baselineTopology=BaselineTopology [id=0, branchingHash=2124742757,
> branchingType='New BaselineTopology',
> baselineNodes=[2e713072-c9fd-4240-8e39-90751710e222]],
> forceChangeBaselineTopology=false, timestamp=1540439222186],
> affTopVer=AffinityTopologyVersion [topVer=1, minorTopVer=1],
> super=DiscoveryEvent [evtNode=ZookeeperClusterNode
> [id=688cb8a6-b5b0-45e6-86d3-ad80d8dff395, addrs=[172.17.42.1, 10.0.200.5,
> 127.0.0.1], order=1, loc=true, client=false], topVer=1, nodeId8=688cb8a6,
> msg=null, type=DISCOVERY_CUSTOM_EVT, tstamp=154043997]],
> nodeId=688cb8a6, evt=DISCOVERY_CUSTOM_EVT]
> java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
> at java.util.ArrayList.rangeCheck(ArrayList.java:657)
> ~[?:1.8.0_171]
> at java.util.ArrayList.get(ArrayList.java:433) ~[?:1.8.0_171]
> at
>
> org.apache.ignite.internal.processors.cache.CacheGroupContext.singleCacheContext(CacheGroupContext.java:385)
> ~[ignite-core-2.6.0.jar:2.6.0]
> at
>
> org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtLocalPartition.<init>(GridDhtLocalPartition.java:198)
> ~[ignite-core-2.6.0.jar:2.6.0]
> at
>
> org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtPartitionTopologyImpl.getOrCreatePartition(GridDhtPartitionTopologyImpl.java:812)
> ~[ignite-core-2.6.0.jar:2.6.0]
> at
>
> org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtPartitionTopologyImpl.initPartitions(GridDhtPartitionTopologyImpl.java:368)
> ~[ignite-core-2.6.0.jar:2.6.0]
> at
>
> org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtPartitionTopologyImpl.beforeExchange(GridDhtPartitionTopologyImpl.java:543)
> ~[ignite-core-2.6.0.jar:2.6.0]
> at
>
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.distributedExchange(GridDhtPartitionsExchangeFuture.java:1141)
> ~[ignite-core-2.6.0.jar:2.6.0]
> at
>
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.init(GridDhtPartitionsExchangeFuture.java:712)
> [ignite-core-2.6.0.jar:2.6.0]
> at
>
> org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body0(GridCachePartitionExchangeManager.java:2419)
> [ignite-core-2.6.0.jar:2.6.0]
> at
>
> org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body(GridCachePartitionExchangeManager.java:2299)
> [ignite-core-2.6.0.jar:2.6.0]
> at
> org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
> [ignite-core-2.6.0.jar:2.6.0]
> at java.lang.Thread.run(Thread.java:748) [?:1.8.0_171]
> 2018-10-25 03:47:02:993 [exchange-worker-#42] INFO
> o.a.i.i.p.c.d.d.p.GridDhtPartitionsExchangeFuture:478 - Finish exchange
> future [startVer=AffinityTopologyVersion [topVer=1, minorTopVer=1],
> resVer=null, err=java.lang.IndexOutOfBoundsException: Index: 0, Size: 0]
> 2018-10-25 03:47:02:996 [exchange-worker-#42] ERROR
> o.a.i.i.p.c.GridCachePartitionExchangeManager:498 - Failed to wait for
> completion of partition map exchange (preloading will not start):
> GridDhtPartitionsExchangeFuture [firstDiscoEvt=DiscoveryCustomEvent
> [customMsg=null, affTopVer=AffinityTopologyVersion [topVer=1,
> minorTopVer=1], super=DiscoveryEvent [evtNode=ZookeeperClusterNode
> [id=688cb8a6-b5b0-45e6-86d3-ad80d8dff395, addrs=[172.17.42.1, 10.0.200.5,
> 127.0.0.1], order=1, loc=true, client=false], topVer=1, nodeId8=688cb8a6,
> msg=null, type=DISCOVERY_CUSTOM_EVT, tstamp=154043997]],
> crd=ZookeeperClusterNode [id=688cb8a6-b5b0-45e6-86d3-ad80d8dff395,
> addrs=[172.17.42.1, 10.0.200.5, 127.0.0.1], order=1, loc=true,
> client=false], exchId=GridDhtPartitionExchangeId
> [topVer=AffinityTopologyVersion [topVer=1, minorTopVer=1],
> discoEvt=DiscoveryCustomEvent [customMsg=null,
> affTopVer=AffinityTopologyVersion [topVer=1, minorTopVer=1],
> super=DiscoveryEvent [evtNode=ZookeeperClusterNode
> [id=688cb8a6-b5b0-45e6-86d3-ad80d8dff395, addrs=[172.17.42.1, 10.0.200.5,
> 127.0.0.1], order=1, loc=true, client=false], topVer=1, nodeId8=688cb8a6,
> msg=null, type=DISCOVERY_CUSTOM_EVT, tstamp=154043997]],
> nodeId=688cb8a6, evt=DISCOVERY_CUSTOM_EVT], added=true,
> 

Re: Long activation times with Ignite persistence enabled

2018-11-01 Thread Denis Mekhanikov
Naveen,

How many caches do you have?
As Alexey mentioned, usage of cache groups
 could reduce the number
of created partitions and improve the startup time.

Denis

сб, 27 окт. 2018 г. в 11:12, Naveen :

> Do we have any  update long activation times ?
>
> I too face the same issue, am using 2.6.
>
> Cluster with 100 GB of disk size, got activated in 5 minutes, and when I
> tried with a cluster which has 3 TB is taking close to an hour.
>
> Is it the expected behavior OR some configuration I am missing here
>
> Thanks
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Durable memory with native persistence isssue

2018-11-01 Thread Denis Mekhanikov
What version of Ignite do you use?

Did you try checking the number of loaded entries using the cache API rather
than SQL?
For example, you can iterate over all stored entries using a scan query; see
the sketch below.
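A sketch of such a check (the cache name follows the SQL_<schema>_<table>
convention, which is an assumption for your setup):

IgniteCache<Object, Object> c =
    ignite.cache("SQL_PUBLIC_STORE_SALES").withKeepBinary();

long cnt = 0;

try (QueryCursor<Cache.Entry<Object, Object>> cur = c.query(new ScanQuery<>())) {
    for (Cache.Entry<Object, Object> e : cur)
        cnt++;
}

System.out.println("Entries seen by the scan query: " + cnt);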

Denis

пт, 26 окт. 2018 г. в 5:02, debashissinha :

> I am using tpcds to benchmark Ignite with the above configuration. I
> generate 1 GB of data for the table store_sales and then manually create
> the table and indexes. Then from sqlline I use a COPY FROM statement to
> load the data file into the table. As per the tpcds metrics the number of
> rows processed is correct, but when I do a count(*) on the same table the
> record count is much lower.
>
> It shows me records processed is 2880404,
> whereas count(*) gives me only 18000 rows.
>
> Thanks & Regards
> Debashis Sinha
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: spring XML configuration to connect ignite database using org.apache.ignite.IgniteJdbcThinDriver

2018-11-01 Thread Denis Mekhanikov
Malashree,

Refer to the following page for information about the Ignite thin JDBC driver:
https://apacheignite-sql.readme.io/docs/jdbc-driver
If you need information on how to use JDBC from Spring, refer to Spring
documentation:
https://docs.spring.io/spring/docs/current/spring-framework-reference/data-access.html#jdbc
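
As an illustration, a minimal Java sketch of wiring the thin driver into a
Spring JdbcTemplate (an XML bean definition would declare the same
properties; the URL and table name are placeholders):

import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.jdbc.datasource.DriverManagerDataSource;

// A DataSource backed by the Ignite thin JDBC driver.
// 10800 is the default thin-protocol port.
DriverManagerDataSource ds = new DriverManagerDataSource();
ds.setDriverClassName("org.apache.ignite.IgniteJdbcThinDriver");
ds.setUrl("jdbc:ignite:thin://127.0.0.1:10800");

JdbcTemplate jdbc = new JdbcTemplate(ds);

// "my_table" is a placeholder table name.
Long cnt = jdbc.queryForObject("SELECT COUNT(*) FROM my_table", Long.class);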

Denis

Wed, 31 Oct 2018 at 16:28, Malashree :

> spring XML configuration to connect ignite database using
> org.apache.ignite.IgniteJdbcThinDriver
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Ignite Cache Memory Size Information

2018-10-30 Thread Denis Mekhanikov
Take a look at the following documentation page:
https://apacheignite.readme.io/docs/memory-metrics
You can use the DataRegionMetrics#getPhysicalMemorySize metric to
approximate the volume of used off-heap memory.
Note that memory metrics should be explicitly enabled, otherwise you will
see 0s.

Heap usage may be monitored using JVM-specific tools. For example, it's
available in the JMX beans provided by the JVM.
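
For example, a minimal Java sketch of both steps (the C#/.NET API you are
using exposes analogous configuration and metrics members, but this sketch
is Java):

import org.apache.ignite.DataRegionMetrics;
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

// Enable metrics for the default data region, then poll off-heap usage.
DataStorageConfiguration storageCfg = new DataStorageConfiguration();
storageCfg.getDefaultDataRegionConfiguration().setMetricsEnabled(true);

IgniteConfiguration cfg = new IgniteConfiguration()
    .setDataStorageConfiguration(storageCfg);

Ignite ignite = Ignition.start(cfg);

for (DataRegionMetrics m : ignite.dataRegionMetrics())
    System.out.println(m.getName() + ": " + m.getPhysicalMemorySize() + " bytes off-heap");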

Denis

Tue, 30 Oct 2018 at 12:46, Stephen Darlington <
stephen.darling...@gridgain.com>:

> The capacity planning page on the documentation site has a good starting
> point:
>
> https://apacheignite.readme.io/docs/capacity-planning
>
> Regards,
> Stephen
>
> On 30 Oct 2018, at 06:35, Hemasundara Rao <
> hemasundara@travelcentrictechnology.com> wrote:
>
> Hi,
> I am looking for cache memory size statistics using the Ignite C#/.NET API.
> Can someone let me know how to get stats of the cache, like how large
> (in MB/GB) the cache is in memory (both on and off heap)?
>
> Currently, I can only get the number of entries in the cache using
> cache.GetSize().
>
> cache.GetMetrics() returns the cache metrics but doesn't have any
> information about the total size in GB/MB of the cache in memory.
>
> Regards,
> Hemasundara Rao Pottangi  | Senior Project Leader
>
> HotelHub LLP
> Phone: +91 80 6741 8700
> Cell: +91 99 4807 7054
> Email: hemasundara@hotelhub.com
> Website: www.hotelhub.com 
>
>
>
>


Re: How to start an Ignite cluster with fixed number of servers?

2018-09-18 Thread Denis Mekhanikov
Ray,

You can use SSL authentication to prevent nodes that don't have a
corresponding certificate from connecting to the cluster.
https://apacheignite.readme.io/docs/ssltls
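
A minimal sketch of such a configuration; the keystore paths and passwords
are placeholders:

import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.ssl.SslContextFactory;

// Only nodes presenting a certificate trusted by this trust store
// can join the cluster.
SslContextFactory sslFactory = new SslContextFactory();
sslFactory.setKeyStoreFilePath("/path/to/node.jks");
sslFactory.setKeyStorePassword("keyStorePwd".toCharArray());
sslFactory.setTrustStoreFilePath("/path/to/trust.jks");
sslFactory.setTrustStorePassword("trustStorePwd".toCharArray());

IgniteConfiguration cfg = new IgniteConfiguration()
    .setSslContextFactory(sslFactory);

Ignition.start(cfg);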

Denis

Tue, 18 Sep 2018 at 7:49, Ray :

> Let's say I want to start an Ignite cluster of three server nodes with
> fixed IP addresses and prevent other servers from joining the cluster as
> Ignite server nodes. How can I do that?
>
>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Service not found for a deployed service

2018-09-14 Thread Denis Mekhanikov
Did you manage to reproduce the problem?

Denis

Mon, 6 Aug 2018 at 14:39, Calvin KL Wong, CLSA :

> Hi Denis,
>
>
>
> >> Did Service#init() method throw any exceptions?
>
> No, we didn’t see any exception.
>
>
>
> I will get a stack trace if I can reproduce it.
>
>
>
> Thanks,
>
> Calvin
>
>
>
> *From:* Denis Mekhanikov [mailto:dmekhani...@gmail.com]
> *Sent:* Friday, August 03, 2018 8:08 PM
>
>
> *To:* user@ignite.apache.org
> *Subject:* Re: Service not found for a deployed service
>
>
>
> Calvin,
>
>
>
> Did Service#init() method throw any exceptions?
>
> If so, then you would see the same problem.
>
>
>
> Denis
>
>
>
Fri, 3 Aug 2018 at 14:13, Calvin KL Wong, CLSA  >:
>
> Actually, we deployed the service at 02:00, and didn’t use the service
> until 16:27 the next day. Then we got the error. There were more than 12
> hours in between.
>
> Does this still seem related to IGNITE-1478
> <https://issues.apache.org/jira/browse/IGNITE-1478> ?
>
> If not, can you think of another possible reason?
>
>
>
> Thanks,
>
> Calvin
>
>
>
> *From:* Denis Mekhanikov [mailto:dmekhani...@gmail.com]
> *Sent:* Thursday, August 02, 2018 11:45 PM
> *To:* user@ignite.apache.org
> *Subject:* Re: Service not found for a deployed service
>
>
>
> Calvin,
>
>
>
> You have this problem due to the following issue: IGNITE-1478
> <https://issues.apache.org/jira/browse/IGNITE-1478>
>
> A workaround here would be to retry method execution with some delay.
>
>
>
> This problem should be fixed under IEP-17
> <https://cwiki.apache.org/confluence/display/IGNITE/IEP-17%3A+Oil+Change+in+Service+Grid>,
> which is in progress right now.
>
>
>
> Denis
>
>
>
Thu, 2 Aug 2018 at 14:33, Calvin KL Wong, CLSA  >:
>
> Hi,
>
>
>
> I deployed a service from a client node to our grid using the following
> code:
>
>
>
> IgniteCluster cluster = ignite.cluster();
>
> ClusterGroup group = cluster.forAttribute(…);
>
> Ignite.services(workerGroup).deployClusterSingleton(“blaze/hsbc”)
>
>
>
> It is fine most of the time.  However we just encountered a case where we
> got an exception when some logic tried to use this service:
>
>
>
> 2018-08-02 16:27:57.713 processors.task.GridTaskWorker [sys-#29%mlog%]
> ERROR - Failed to obtain remote job result policy for result from
> ComputeTask.result(..) method (will fail the whole task): GridJobResultImpl
> [job=C2 [c=ServiceProxyCallable [mtdName=execute, svcName=blaze/hsbc,
> ignite=null]], sib=GridJobSiblingImpl
> [sesId=f66f54be461-65c907a3-8fcf-4ddd-acb1-6553be3d1dc9,
> jobId=076f54be461-65c907a3-8fcf-4ddd-acb1-6553be3d1dc9,
> nodeId=236a47e9-7fdb-464e-be44-b24d0942d75c, isJobDone=false],
> jobCtx=GridJobContextImpl
> [jobId=076f54be461-65c907a3-8fcf-4ddd-acb1-6553be3d1dc9, timeoutObj=null,
> attrs={}], node=TcpDiscoveryNode [id=236a47e9-7fdb-464e-be44-b24d0942d75c,
> addrs=[10.23.8.165], sockAddrs=[zhkdlp1712.int.clsa.com/10.23.8.165:0],
> discPort=0, order=37, intOrder=27, lastExchangeTime=1533148447088,
> loc=false, ver=2.3.0#20180518-sha1:02cf6abf, isClient=true], ex=class
> o.a.i.IgniteException: Service not found: blaze/hsbc, hasRes=true,
> isCancelled=false, isOccupied=true]
>
> org.apache.ignite.IgniteException: Remote job threw user exception
> (override or implement ComputeTask.result(..) method if you would like to
> have automatic failover for this exception).
>
> at
> org.apache.ignite.compute.ComputeTaskAdapter.result(ComputeTaskAdapter.java:101)
> ~[liquid-logic.jar:2.0.10]
>
> at
> org.apache.ignite.internal.processors.task.GridTaskWorker$5.apply(GridTaskWorker.java:1047)
> [liquid-logic.jar:2.0.10]
>
> at
> org.apache.ignite.internal.processors.task.GridTaskWorker$5.apply(GridTaskWorker.java:1040)
> [liquid-logic.jar:2.0.10]
>
> at
> org.apache.ignite.internal.util.IgniteUtils.wrapThreadLoader(IgniteUtils.java:6663)
> [liquid-logic.jar:2.0.10]
>
> at
> org.apache.ignite.internal.processors.task.GridTaskWorker.result(GridTaskWorker.java:1040)
> [liquid-logic.jar:2.0.10]
>
> at
> org.apache.ignite.internal.processors.task.GridTaskWorker.onResponse(GridTaskWorker.java:858)
> [liquid-logic.jar:2.0.10]
>
> at
> org.apache.ignite.internal.processors.task.GridTaskProcessor.processJobExecuteResponse(GridTaskProcessor.java:1066)
> [liquid-logic.jar:2.0.10]
>
> at
> org.apache.ignite.internal.processors.task.GridTaskProcessor$JobMessageListener.onMessage(GridTaskProcessor.java:1301)
> [liquid-logic.jar:2.0.10]
>
>

Re: Need help for setting offheap memory

2018-09-14 Thread Denis Mekhanikov
So, Amol,

Did you look at the heap dump?

Denis

Mon, 6 Aug 2018 at 18:46, Amol Zambare :

> Hi Alex,
>
> Here is the full stack trace
>
> [INFO][tcp-disco-sock-reader-#130][TcpDiscoverySpi] Finished serving
> remote node connection
> [INFO][tcp-disco-sock-reader-#653][TcpDiscoverySpi] Started serving remote
> node connection
> [SEVERE][tcp-disco-sock-reader-#130][TcpDiscoverySpi] Runtime error caught
> during grid runnable execution: Socket reader [id=313,
> name=tcp-disco-sock-reader-#130,
> nodeId=35a7ca47-3245-4f9f-8114-9b65c6d5e9bf]
> java.lang.OutOfMemoryError: GC overhead limit exceeded
> at java.util.Arrays.copyOf(Arrays.java:3332)
> at
> java.lang.AbstractStringBuilder.ensureCapacityInternal(AbstractStringBuilder.java:124)
> at
> java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:596)
> at java.lang.StringBuilder.append(StringBuilder.java:190)
> at
> java.io.ObjectInputStream$BlockDataInputStream.readUTFSpan(ObjectInputStream.java:3450)
> at
> java.io.ObjectInputStream$BlockDataInputStream.readUTFBody(ObjectInputStream.java:3358)
> at
> java.io.ObjectInputStream$BlockDataInputStream.readUTF(ObjectInputStream.java:3170)
> at
> java.io.ObjectInputStream.readString(ObjectInputStream.java:1850)
> at
> java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1527)
> at java.io.ObjectInputStream.readObject(ObjectInputStream.java:423)
> at
> org.apache.ignite.internal.util.IgniteUtils.readMap(IgniteUtils.java:5146)
> at
> org.apache.ignite.spi.discovery.tcp.internal.TcpDiscoveryNode.readExternal(TcpDiscoveryNode.java:617)
> at
> java.io.ObjectInputStream.readExternalData(ObjectInputStream.java:2063)
> at
> java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2012)
> at
> java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1536)
> at
> java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2232)
> at
> java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2156)
> at
> java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2014)
> at
> java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1536)
> at java.io.ObjectInputStream.readObject(ObjectInputStream.java:423)
> at java.util.ArrayList.readObject(ArrayList.java:791)
> at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source)
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at
> java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:1058)
> at
> java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2123)
> at
> java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2014)
> at
> java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1536)
> at
> java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2232)
> at
> java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2156)
> at
> java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2014)
> at
> java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1536)
> [INFO][tcp-disco-srvr-#3][TcpDiscoverySpi] TCP discovery accepted incoming
> connection
> [INFO][tcp-disco-srvr-#3][TcpDiscoverySpi] TCP discovery accepted incoming
> connection
> [INFO][tcp-disco-srvr-#3][TcpDiscoverySpi] TCP discovery spawning a new
> thread for connection
> [INFO][tcp-disco-srvr-#3][TcpDiscoverySpi] TCP discovery accepted incoming
> connection
> [INFO][tcp-disco-sock-reader-#654][TcpDiscoverySpi] Started serving remote
> node connection
> [INFO][tcp-disco-srvr-#3][TcpDiscoverySpi] TCP discovery spawning a new
> thread for connection
>
> Thanks,
> Amol
>
>
>
> On Sat, Aug 4, 2018 at 3:28 AM, Alex Plehanov 
> wrote:
>
>> Offheap and heap memory regions are used for different purposes and can't
>> replace each other. You can't get rid of OOME in heap by increasing offheap
>> memory.
>> Can you provide full exception stack trace?
>>
>> 2018-08-03 20:55 GMT+03:00 Amol Zambare :
>>
>>> Thanks Alex and Denis
>>>
>>> We have configured off-heap memory to 100 GB and we have a 10-node
>>> Ignite cluster. However, when we run the Spark job we see the following
>>> error in the Ignite logs. When we run the Spark job, heap utilization on
>>> most of the Ignite nodes increases significantly even though we are
>>> using off-heap storage. We have set the JVM heap size on each Ignite
>>> node to 50 GB. Please suggest.
>>>
>>> java.lang.OutOfMemoryError: GC overhead limit exceeded
>>>  at java.util.Arrays.copyOf(Arrays.java:3332)
>>>  at
>>> java.lang.AbstractStringBuilder.ensureCapacityInternal(AbstractStringBuilder.java:124)
>>>
>>>
>>> On Fri, Aug 3, 2018 at 4:16 AM, Alex Plehanov 
>>> wrote:
>>>
  "Non-heap 

Re: ScanQuery throwing Exception for java Thin client while peerclassloading is enabled

2018-08-30 Thread Denis Mekhanikov
The thin client protocol doesn't support P2P class loading,
so you need to provide the implementation of the scan query filter on the
server side.
P2P class loading works only between regular nodes, i.e. servers and thick
clients.
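
A minimal sketch of the thin-client side; MyServerSideFilter is a
placeholder for a class implementing IgniteBiPredicate that has to be
present on every server node's classpath, and the address and cache name
are assumptions:

import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.ScanQuery;
import org.apache.ignite.client.ClientCache;
import org.apache.ignite.client.IgniteClient;
import org.apache.ignite.configuration.ClientConfiguration;

// The filter is deserialized and evaluated on the server, so its class
// must be deployed there; P2P class loading won't ship it for you.
try (IgniteClient client = Ignition.startClient(
        new ClientConfiguration().setAddresses("127.0.0.1:10800"))) {

    ClientCache<Integer, String> cache = client.cache("myCache");

    cache.query(new ScanQuery<Integer, String>(new MyServerSideFilter()))
        .getAll()
        .forEach(e -> System.out.println(e.getKey() + " -> " + e.getValue()));
}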

Denis

Thu, 30 Aug 2018 at 9:14, Saby :

> ScanQuery is reporting the following exception while trying to fetch data
> from a remote Ignite server through the Java thin client with
> peerClassLoading enabled, but SqlQuery fetches the correct result and
> sends it back to the client.
> If a lambda expression is used as the filter in the ScanQuery, then the
> Ignite server reports "java.lang.IllegalArgumentException: Invalid lambda
> deserialization".
>
> Is there any way to use ScanQuery to fetch data from a remote server using
> the thin client without copying jar(s) to the remote server(s)?
>
> public class Predicate<K, V> implements IgniteBiPredicate<K, V> {
>     public boolean apply(K e1, V e2) {
>         return true;
>     }
> }
>
>
> [10:48:39,126][SEVERE][client-connector-#65][ClientListenerNioListener]
> Failed to process client request
>
> [req=o.a.i.i.processors.platform.client.cache.ClientCacheScanQueryRequest@504d875d
> ]
> class org.apache.ignite.binary.BinaryInvalidTypeException:
> XXX.datamigration.imdg.abstractionlayer.connections.impl.ignite.Predicate
> at
>
> org.apache.ignite.internal.binary.BinaryContext.descriptorForTypeId(BinaryContext.java:697)
> at
>
> org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize0(BinaryReaderExImpl.java:1755)
> at
>
> org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize(BinaryReaderExImpl.java:1714)
> at
>
> org.apache.ignite.internal.binary.BinaryObjectImpl.deserializeValue(BinaryObjectImpl.java:798)
> at
>
> org.apache.ignite.internal.binary.BinaryObjectImpl.deserialize(BinaryObjectImpl.java:640)
> at
>
> org.apache.ignite.internal.processors.platform.client.cache.ClientCacheScanQueryRequest.createFilter(ClientCacheScanQueryRequest.java:126)
> at
>
> org.apache.ignite.internal.processors.platform.client.cache.ClientCacheScanQueryRequest.process(ClientCacheScanQueryRequest.java:92)
> at
>
> org.apache.ignite.internal.processors.platform.client.ClientRequestHandler.handle(ClientRequestHandler.java:57)
> at
>
> org.apache.ignite.internal.processors.odbc.ClientListenerNioListener.onMessage(ClientListenerNioListener.java:160)
> at
>
> org.apache.ignite.internal.processors.odbc.ClientListenerNioListener.onMessage(ClientListenerNioListener.java:44)
> at
>
> org.apache.ignite.internal.util.nio.GridNioFilterChain$TailFilter.onMessageReceived(GridNioFilterChain.java:279)
> at
>
> org.apache.ignite.internal.util.nio.GridNioFilterAdapter.proceedMessageReceived(GridNioFilterAdapter.java:109)
> at
>
> org.apache.ignite.internal.util.nio.GridNioAsyncNotifyFilter$3.body(GridNioAsyncNotifyFilter.java:97)
> at
> org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
> at
>
> org.apache.ignite.internal.util.worker.GridWorkerPool$1.run(GridWorkerPool.java:70)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown
> Source)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown
> Source)
> at java.lang.Thread.run(Unknown Source)
> Caused by: java.lang.ClassNotFoundException:
> XXX.datamigration.imdg.abstractionlayer.connections.impl.ignite.Predicate
> at java.net.URLClassLoader.findClass(Unknown Source)
> at java.lang.ClassLoader.loadClass(Unknown Source)
> at sun.misc.Launcher$AppClassLoader.loadClass(Unknown Source)
> at java.lang.ClassLoader.loadClass(Unknown Source)
> at java.lang.Class.forName0(Native Method)
> at java.lang.Class.forName(Unknown Source)
> at
> org.apache.ignite.internal.util.IgniteUtils.forName(IgniteUtils.java:8608)
> at
>
> org.apache.ignite.internal.MarshallerContextImpl.getClass(MarshallerContextImpl.java:349)
> at
>
> org.apache.ignite.internal.binary.BinaryContext.descriptorForTypeId(BinaryContext.java:688)
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Can Ignite.getOrCreateCache(CacheConfiguration) return null ?

2018-08-29 Thread Denis Mekhanikov
What does the *getOrCreateCache* method do internally?

Denis

Wed, 29 Aug 2018 at 5:52, HEWA WIDANA GAMAGE, SUBASH <
subash.hewawidanagam...@fmr.com>:

> Hi all,
>
> Is there any possibility for this to happen? We’re using Ignite 1.9.0.
>
> Following is the code we use to obtain the cache. We call this line for
> every cache operation (unintentionally), but we wanted to know if the
> following line can return a null cache instance under any circumstance.
>
>
>
> Cache cache =
> getOrCreateCache(CACHE_NAME,CreatedExpiryPolicy.factoryOf(new
> Duration(TimeUnit.SECONDS, 300)));
>
>
>
>
>


Re: keep old cache value than new value

2018-08-29 Thread Denis Mekhanikov
If you use a cache store and change values in the underlying database, then
the changes won't be propagated to Ignite, and old values will be used.
There is no mechanism that can notify the cache about updates in the
3rd-party DB.
But if you didn't have this value in the cache, and read-through is
enabled, then it will be loaded on the next access.
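
A minimal sketch of a read-through configuration; the cache name, types and
MyCacheStore are placeholders:

import javax.cache.configuration.FactoryBuilder;
import org.apache.ignite.configuration.CacheConfiguration;

// On a cache miss, Ignite calls the store's load() against the 3rd-party
// DB and puts the result into the cache.
CacheConfiguration<Long, String> cfg = new CacheConfiguration<>("myCache");
cfg.setReadThrough(true);
cfg.setCacheStoreFactory(FactoryBuilder.factoryOf(MyCacheStore.class));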

Denis


Wed, 29 Aug 2018 at 12:05, luqmanahmad :

> Any help from anyone would be highly appreciated.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Ignite memory leak

2018-08-17 Thread Denis Mekhanikov
Igor,

The fix for this issue is merged to master. It will be included in Ignite
2.7.
Until then you can use one of the nightly builds, available at
https://ci.ignite.apache.org/viewLog.html?buildId=lastSuccessful&buildTypeId=Releases_NightlyRelease_RunApacheIgniteNightlyRelease&tab=artifacts&guest=1

Denis

Mon, 6 Aug 2018 at 15:01, Denis Mekhanikov :

> Igor,
>
> There really is a memory leak in this place.
> Thank you for the analysis! This is very cool!
>
> I filed a JIRA ticket for it:
> https://issues.apache.org/jira/browse/IGNITE-9196
> Feel free to assign it to yourself and fix it, since you already debugged
> through it.
> If not, then I think it will be fixed in Ignite 2.7 anyway.
>
> Denis
>
Sat, 4 Aug 2018 at 19:11, igor.tanackovic :
>
>> Hi,
>>
>> I want to discuss a potential memory leak in Ignite (I say potential, as
>> I'm not 100% sure I'm not doing something wrong ;))
>>
>> However, I believe I found a corner case which triggers a memory leak
>> even with the latest stable version. In my case, MapQueryResults entries
>> fill the MapNodeResults map without being removed, which leads to an
>> out-of-memory exception or huge GC major collections. I'm pretty sure the
>> issue is in the *fetchNextPage* method of the MapQueryResult class, as
>> this method never returns *true* if you have a result set with pageSize*n
>> records.
>>
>> Actually, we had a major issue with memory in our production cluster, and
>> the only workaround was to limit all queries below the page size
>> (currently 1024).
>>
>>
>> Regards,
>> Igor
>>
>>
>>
>>
>> --
>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>>
>


Re: taking more time while reading records.

2018-08-13 Thread Denis Mekhanikov
Shuvendu,

What is your cluster configuration?
How many nodes do you have?
Do you use persistence or 3rd party storage?
How do you read data?

Denis

Mon, 13 Aug 2018 at 7:43, Shuvendu Das :

> Hi,
>
> We came across a situation where it is taking 10 times more time to read
> records.
>   Our initial grid size was 390,746 (count).
>   The grid size increased to 7,297,301 (count).
>   Now reads are taking 10 times more time.
>
> Regards
>
> Shuvendu
>


Re: Running Spark Job in Background

2018-08-13 Thread Denis Mekhanikov
This is not really an Ignite question. Try asking it on the Spark user list:
http://apache-spark-user-list.1001560.n3.nabble.com/

Running commands with & is a valid approach though.
You can also try using nohup.

Denis

Sun, 12 Aug 2018 at 5:12, ApacheUser :

> Hello Ignite Team,
>
> I have a Spark job that streams live data into an Ignite cache. The job
> gets closed as soon as I close the window (Linux shell). The other Spark
> streaming jobs I run with "&" at the end of the spark-submit command, and
> they run for a very long time until I stop them or they crash due to
> other factors.
>
> Is there any way I can run the Spark-Ignite job continuously?
>
> This is my spark submit:
>
> spark-submit --packages org.apache.spark:spark-sql-kafka-0-10_2.11:2.3.0
> --master spark://:7077  --executor-cores x --total-executor-cores x
> --executor-memory Xg --conf spark.driver.maxResultSize=Xg --driver-memory
> Xg
> --conf spark.default.parallelism=XX --conf
> spark.serializer=org.apache.spark.serializer.KryoSerializer   --class
> com...dataload .jar  &
>
>
> Thanks
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: OOM on connecting to Ignite via JDBC

2018-08-07 Thread Denis Mekhanikov
Orel,

Could you show your Ignite configuration? I'd like to make sure that you
configured the JDBC port correctly.
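
For reference, a sketch of where the thin JDBC port is configured; 10800 is
the default, while 8080 usually belongs to the REST endpoint:

import org.apache.ignite.configuration.ClientConnectorConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

// JDBC/ODBC/thin-client connections all go through the client connector.
IgniteConfiguration cfg = new IgniteConfiguration()
    .setClientConnectorConfiguration(
        new ClientConnectorConfiguration().setPort(10800));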

Denis

Mon, 6 Aug 2018 at 17:54, Orel Weinstock (ExposeBox) :

> This is the correct port - I've set it manually. I've used the same
> configuration with the web console for inserting SQL rows and it works
> great with write-through.
>
> On 6 August 2018 at 16:30, Павлухин Иван  wrote:
>
>> Hi Orel,
>>
>> Are you sure the correct port is used? By default, port 10800 is used for
>> JDBC connections. You have 8080 in your command line.
>>
>> The error could be caused by reading unexpected input from the server and
>> interpreting it as a very large packet size. An attempt to allocate a
>> buffer of that size could simply run into an OOME.
>>
>> 2018-08-06 11:56 GMT+03:00 Orel Weinstock (ExposeBox) > >:
>>
>>> I've followed the guide on setting up DBeaver to work with Ignite - I've
>>> set up a driver in DBeaver by selecting a class from the ignite-core jar,
>>> both version 2.6.0
>>>
>>> My cluster is up and running (e.g. write-through works) now that I've
>>> added the MySQL JDBC driver to the (web-console generated) pom.xml's
>>> dependencies, but I still can't connect to Ignite via DBeaver.
>>>
>>> On 6 August 2018 at 11:17, Denis Mekhanikov 
>>> wrote:
>>>
>>>> Orel,
>>>>
>>>> JDBC driver fails on handshake for some reason.
>>>> It fails with OOM when trying to allocate a byte array for the
>>>> handshake message.
>>>> But there is not much data transferred in it. Most probably, message
>>>> size is read improperly.
>>>>
>>>> Do you use matching versions of JDBC driver and Ignite nodes?
>>>>
>>>> Denis
>>>>
>>>>
Sun, 5 Aug 2018 at 11:01, Orel Weinstock (ExposeBox) <
>>>> o...@exposebox.com>:
>>>>
>>>>> Hi all,
>>>>>
>>>>> Trying to get an Ignite cluster up and going for testing before taking
>>>>> it to production.
>>>>> I've set up Ignite 2.6 on a cluster with a single node on a Google
>>>>> Cloud Compute instance and I have the web console working as well.
>>>>>
>>>>> I've imported a table from MySQL and re-run the cluster with the
>>>>> resulting Docker image.
>>>>>
>>>>> Querying for the table via the web console proved fruitless, so I've
>>>>> switched to SQLLine (on the cluster itself). Still no cigar:
>>>>>
>>>>> main(SqlLine.java:265)
>>>>> moo@ignite:/home/moo$ /usr/share/apache-ignite/bin/sqlline.sh --verbose=true -u jdbc:ignite:thin://127.0.0.1:8080
>>>>> issuing: !connect jdbc:ignite:thin://127.0.0.1:8080 '' '' org.apache.ignite.IgniteJdbcThinDriver
>>>>> Connecting to jdbc:ignite:thin://127.0.0.1:8080
>>>>> java.lang.OutOfMemoryError: Java heap space
>>>>>   at org.apache.ignite.internal.jdbc.thin.JdbcThinTcpIo.read(JdbcThinTcpIo.java:586)
>>>>>   at org.apache.ignite.internal.jdbc.thin.JdbcThinTcpIo.read(JdbcThinTcpIo.java:575)
>>>>>   at org.apache.ignite.internal.jdbc.thin.JdbcThinTcpIo.handshake(JdbcThinTcpIo.java:328)
>>>>>   at org.apache.ignite.internal.jdbc.thin.JdbcThinTcpIo.start(JdbcThinTcpIo.java:223)
>>>>>   at org.apache.ignite.internal.jdbc.thin.JdbcThinTcpIo.start(JdbcThinTcpIo.java:144)
>>>>>   at org.apache.ignite.internal.jdbc.thin.JdbcThinConnection.ensureConnected(JdbcThinConnection.java:148)
>>>>>   at org.apache.ignite.internal.jdbc.thin.JdbcThinConnection.<init>(JdbcThinConnection.java:137)
>>>>>   at org.apache.ignite.IgniteJdbcThinDriver.connect(IgniteJdbcThinDriver.java:157)
>>>>>   at sqlline.DatabaseConnection.connect(DatabaseConnection.java:156)
>>>>>   at sqlline.DatabaseConnection.getConnection(DatabaseConnection.java:204)
>>>>>   at sqlline.Commands.connect(Commands.java:1095)
>>>>>   at sqlline.Commands.connect(Commands.java:1001)
>>>>>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>>>>   at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>>>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>>>>   at java.lang.reflect.Method.inv

Re: Need help for setting offheap memory

2018-08-06 Thread Denis Mekhanikov
Amol,

Data is pulled onto the heap every time you use it.
So, if your Spark jobs operate over a big amount of data, then heap memory
utilization will be high.
Take a heap dump next time you encounter an OutOfMemoryError.
You can make Java take a heap dump every time it fails with an OOME (the
-XX:+HeapDumpOnOutOfMemoryError and -XX:HeapDumpPath options):
https://docs.oracle.com/javase/8/docs/technotes/guides/troubleshoot/clopts001.html

You'll be able to tell what causes the failure by analyzing the heap dump.
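
If it's more convenient to trigger a dump from code at a suspicious moment,
here is a HotSpot-only sketch (the output path is a placeholder; both calls
throw IOException):

import java.lang.management.ManagementFactory;
import com.sun.management.HotSpotDiagnosticMXBean;

// Writes an .hprof snapshot of the heap; "true" dumps only live objects.
HotSpotDiagnosticMXBean diag = ManagementFactory.newPlatformMXBeanProxy(
    ManagementFactory.getPlatformMBeanServer(),
    "com.sun.management:type=HotSpotDiagnostic",
    HotSpotDiagnosticMXBean.class);

diag.dumpHeap("/tmp/ignite-heap.hprof", true);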

Denis

Sat, 4 Aug 2018 at 11:29, Alex Plehanov :

> Offheap and heap memory regions are used for different purposes and can't
> replace each other. You can't get rid of OOME in heap by increasing offheap
> memory.
> Can you provide full exception stack trace?
>
> 2018-08-03 20:55 GMT+03:00 Amol Zambare :
>
>> Thanks Alex and Denis
>>
>> We have configured off-heap memory to 100 GB and we have a 10-node Ignite
>> cluster. However, when we run the Spark job we see the following error in
>> the Ignite logs. When we run the Spark job, heap utilization on most of
>> the Ignite nodes increases significantly even though we are using
>> off-heap storage. We have set the JVM heap size on each Ignite node to
>> 50 GB. Please suggest.
>>
>> java.lang.OutOfMemoryError: GC overhead limit exceeded
>>  at java.util.Arrays.copyOf(Arrays.java:3332)
>>  at
>> java.lang.AbstractStringBuilder.ensureCapacityInternal(AbstractStringBuilder.java:124)
>>
>>
>> On Fri, Aug 3, 2018 at 4:16 AM, Alex Plehanov 
>> wrote:
>>
>>>  "Non-heap memory ..." metrics in visor have nothing to do with offheap
>>> memory allocated for data regions. "Non-heap memory" returned by visor
>>> it's JVM managed memory regions other then heap used for internal JVM
>>> purposes (JIT compiler, etc., see [1]). Memory allocated in offheap by
>>> Ignite for data regions (via "unsafe") not included into this metrics. Some
>>> data region related metrics in visor were implemented in Ignite 2.4.
>>>
>>> [1]
>>> https://docs.oracle.com/javase/8/docs/api/java/lang/management/MemoryMXBean.html
>>>
>>
>>
>


Re: Ignite memory leak

2018-08-06 Thread Denis Mekhanikov
Igor,

There really is a memory leak in this place.
Thank you for the analysis! This is very cool!

I filed a JIRA ticket for it:
https://issues.apache.org/jira/browse/IGNITE-9196
Feel free to assign it to yourself and fix it, since you already debugged
through it.
If not, then I think it will be fixed in Ignite 2.7 anyway.

Denis

Sat, 4 Aug 2018 at 19:11, igor.tanackovic :

> Hi,
>
> I want to discuss a potential memory leak in Ignite (I say potential, as
> I'm not 100% sure I'm not doing something wrong ;))
>
> However, I believe I found a corner case which triggers a memory leak even
> with the latest stable version. In my case, MapQueryResults entries fill
> the MapNodeResults map without being removed, which leads to an
> out-of-memory exception or huge GC major collections. I'm pretty sure the
> issue is in the *fetchNextPage* method of the MapQueryResult class, as
> this method never returns *true* if you have a result set with pageSize*n
> records.
>
> Actually, we had a major issue with memory in our production cluster, and
> the only workaround was to limit all queries below the page size
> (currently 1024).
>
>
> Regards,
> Igor
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>

