Re: H2database dependency in Apache Ignite

2023-01-20 Thread Andrey Mashenkov
Hi,

Ignite uses H2 as one of two available SQL execution engines; the
'ignite-indexing' module depends on H2.
Long story short, we can't bump the H2 version anymore, as some integration
points were dropped in later H2 versions.
However, this CVE, along with the other known issues [1], does not affect
Ignite.

[1] https://issues.apache.org/jira/browse/IGNITE-15241

On Fri, Jan 20, 2023 at 1:06 PM Gianluca Bonetti 
wrote:

> Hello
>
> I am also using Apache Ignite for some projects, but I don't see any
> dependency on h2 in my projects.
> I think h2 dependency is coming from somewhere else.
> Can you run a "mvn dependency:tree" and share the results?
>
> Cheers
> Gianluca
>
> On Fri, 20 Jan 2023 at 09:56, David Cussen 
> wrote:
>
>> Hi,
>>
>> I am an employee in Workday and our team uses Apache Ignite for one of
>> our products. There is a dependency on com.h2database:h2:jar:1.4.197 :
>> https://github.com/apache/ignite/blob/master/parent/pom.xml#L92
>>
>>
>>
>> We are wondering if there is a plan to upgrade this dependency to
>> remediate CVE-2021-42392
>>  and if
>> so, do you have an ETA on when this would be available?
>>
>>
>>
>> Thank you.
>>
>>
>>
>> Kind regards,
>>
>> David Cussen
>>
>> Workday
>>
>>
>>
>

-- 
Best regards,
Andrey V. Mashenkov


Re: Fw:Expiry Policy can not working in gridgain 8.8.9

2022-05-12 Thread Andrey Mashenkov
Hi,
You wrote to the Apache Ignite user list.
Apache Ignite and GridGain are different projects with their own lifecycles
and roadmaps.

BTW, this looks like an issue in 8.8.9 that is already fixed in 8.8.18.
Please contact the vendor for details.


Re: Ignite IPv6 support

2020-10-09 Thread Andrey Mashenkov
Hi,

As far as I know, Ignite can't work with IPv6 on Mac devices.
Also, we have a ticket related to K8s with IPv6. The ticket has a patch and
will likely be fixed in one of the future releases.
Please find the known issues in JIRA [1].

Ignite is not well tested in IPv6 environments, and any help here would be
appreciated.

[1]
https://issues.apache.org/jira/browse/IGNITE-13204?jql=project%20%3D%20IGNITE%20AND%20resolution%20%3D%20Unresolved%20AND%20text%20~%20%22ipv6%22
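Until the IPv6 issues are resolved, a known workaround from the Ignite network configuration docs is to force the JVM onto the IPv4 stack. A minimal sketch, assuming your launcher honors the standard JVM_OPTS variable (the stock ignite.sh does):

```shell
# Force the IPv4 stack to sidestep known IPv6-related discovery issues.
export JVM_OPTS="$JVM_OPTS -Djava.net.preferIPv4Stack=true"
```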

On Fri, Oct 9, 2020 at 5:20 PM Vishwas Bm  wrote:

> Hi,
>
> We are using ignite on K8s environment. Now we wanted to move to a K8s
> environment with ipv6 support.
>
> But I read in below document that ignite has some issues on IPv6 cluster:
> https://apacheignite.readme.io/docs/network-config
>
> May I know what are the challenges or issues that are seen with Ignite on
> an IPv6 enabled cluster ?
>
>
> *Thanks & Regards,*
>
> *Vishwas *
>


-- 
Best regards,
Andrey V. Mashenkov


Re: Failed to map SQL query

2020-08-25 Thread Andrey Mashenkov
Hi,

Most likely, the query's intermediate result doesn't fit into the JVM heap.
The query may require fetching all table data before the sort is applied.

You can try to create a composite index over the (act_id, mer_id, score)
columns.
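A sketch of the suggested index against the table definition from the report (the index name is made up). With the index order matching the ORDER BY, the LIMIT 100 can be served without sorting the full 5,000,000 rows:

```sql
-- Matches ORDER BY act_id, mer_id, score, so rows can be read in index
-- order and the scan can stop after 100 rows.
CREATE INDEX act_rank_sort_idx ON act_rank (act_id, mer_id, score);
```

The JDBC thin driver's lazy=true connection flag (streaming results instead of materializing them) may also help keep intermediate results out of the heap.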



On Tue, Aug 25, 2020 at 8:42 AM 1115098...@qq.com  wrote:

> Hi, an error happened when I ran a SQL query in the Ignite cluster. Thanks.
>
> Some info as follow:
> -- sql
> -- act_rank  has 5,000,000 rows
> select * from act_rank
> order by act_id,mer_id,score
> limit 100 ;
>
> -- sql error info:
> Error: javax.cache.CacheException: Failed to map SQL query to topology on
> data node [dataNodeId=ca448962-9ce9-4321-82a7-2d12e147f34c, msg=Data node
> has left the grid during query execution
> [nodeId=ca448962-9ce9-4321-82a7-2d12e147f34c]] (state=5,code=1)
> java.sql.SQLException: javax.cache.CacheException: Failed to map SQL query
> to topology on data node [dataNodeId=ca448962-9ce9-4321-82a7-2d12e147f34c,
> msg=Data node has left the grid during query execution
> [nodeId=ca448962-9ce9-4321-82a7-2d12e147f34c]]
> at
> org.apache.ignite.internal.jdbc.thin.JdbcThinConnection.sendRequest(JdbcThinConnection.java:750)
> at
> org.apache.ignite.internal.jdbc.thin.JdbcThinStatement.execute0(JdbcThinStatement.java:212)
> at
> org.apache.ignite.internal.jdbc.thin.JdbcThinStatement.execute(JdbcThinStatement.java:475)
> at sqlline.Commands.execute(Commands.java:823)
> at sqlline.Commands.sql(Commands.java:733)
> at sqlline.SqlLine.dispatch(SqlLine.java:795)
> at sqlline.SqlLine.begin(SqlLine.java:668)
> at sqlline.SqlLine.start(SqlLine.java:373)
> at sqlline.SqlLine.main(SqlLine.java:265)
>
> -- ignite server error log
> SELECT
> __Z0.ID __C0_0,
> __Z0.ACT_ID __C0_1,
> __Z0.MEM_ID __C0_2,
> __Z0.MER_ID __C0_3,
> __Z0.SHOP_ID __C0_4,
> __Z0.AREA_ID __C0_5,
> __Z0.PHONE_NO __C0_6,
> __Z0.SCORE __C0_7
> FROM PUBLIC.ACT_RANK __Z0
> ORDER BY 2, 4, 8 LIMIT 100 [90108-197]
> at
> org.h2.message.DbException.getJdbcSQLException(DbException.java:357)
> at org.h2.message.DbException.get(DbException.java:168)
> at org.h2.message.DbException.convert(DbException.java:301)
> at org.h2.command.Command.executeQuery(Command.java:214)
> at
> org.h2.jdbc.JdbcPreparedStatement.executeQuery(JdbcPreparedStatement.java:114)
> at
> org.apache.ignite.internal.processors.query.h2.PreparedStatementExImpl.executeQuery(PreparedStatementExImpl.java:67)
> at
> org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.executeSqlQuery(IgniteH2Indexing.java:1421)
> ... 13 more
> Caused by: java.lang.OutOfMemoryError: GC overhead limit exceeded
> at
> org.h2.command.dml.Select$LazyResultQueryFlat.fetchNextRow(Select.java:1457)
> at org.h2.result.LazyResult.hasNext(LazyResult.java:79)
> at org.h2.result.LazyResult.next(LazyResult.java:59)
> at org.h2.command.dml.Select.queryFlat(Select.java:527)
> at org.h2.command.dml.Select.queryWithoutCache(Select.java:633)
> at
> org.h2.command.dml.Query.queryWithoutCacheLazyCheck(Query.java:114)
> at org.h2.command.dml.Query.query(Query.java:352)
> at org.h2.command.dml.Query.query(Query.java:333)
> at org.h2.command.CommandContainer.query(CommandContainer.java:114)
> at org.h2.command.Command.executeQuery(Command.java:202)
> ... 16 more
>
>
> -- table struct   (total rows:5,000,000)
> CREATE TABLE act_rank(
>   id varchar(50) primary key,
>   act_id VARCHAR(50),
>   mem_id VARCHAR(50),
>   mer_id VARCHAR(50),
>   shop_id VARCHAR(50),
>   area_id VARCHAR(50),
>   phone_no VARCHAR(16),
>   score INT
> );
>
> -- visor info
> visor> cache -c=@c4 -a
> Time of the snapshot: 2020-08-24 11:20:50
>
> ++
> | Name(@)  |Mode | Nodes | Total entries (Heap /
> Off-heap) |  Primary entries (Heap / Off-heap)  |   Hits|  Misses   |
>  Reads   |  Writes   |
>
> ++
> | SQL_PUBLIC_ACT_RANK(@c4) | PARTITIONED | 3 | 500 (0 / 500)
>  | min: 1635268 (0 / 1635268)  | min: 0| min: 0|
> min: 0| min: 0|
> |  | |   |
>  | avg: 166.67 (0.00 / 166.67) | avg: 0.00 | avg: 0.00 |
> avg: 0.00 | avg: 0.00 |
> |  | |   |
>  | max: 1720763 (0 / 1720763)  | max: 0| max: 0|
> max: 0| max: 0|
>
> ++
>
> Cache 'SQL_PUBLIC_ACT_RANK(@c4)':
> 

Re: RE: Re: Unsafe usage of org.h2.util.JdbcUtils in Ignite

2020-03-19 Thread Andrey Mashenkov
Hi,

In the Apache Ignite master branch I see a separate class,
H2JavaObjectSerializer, that implements JavaObjectSerializer.
It seems this won't be released in 2.8:
https://issues.apache.org/jira/browse/IGNITE-12609


On Thu, Mar 19, 2020 at 4:03 PM Andrey Davydov 
wrote:

> It seems that refactoring the h2Serializer method in the following manner
> will be safe for marshallers that don't depend on the Ignite instance, and
> it will be faster anyway due to resolving the clsLdr once. For the binary
> marshaller the solution is still unsafe =(((
>
>
>
> private JavaObjectSerializer h2Serializer() {
>
> ClassLoader clsLdr = ctx != null ?
> U.resolveClassLoader(ctx.config()) : null;
>
> return new CustomJavaObjectSerializer(marshaller, clsLdr);
>
> }
>
>
>
> private static final class CustomJavaObjectSerializer implements
> JavaObjectSerializer {
>
> private final Marshaller marshaller;
>
> private final ClassLoader clsLdr;
>
>
>
> CustomJavaObjectSerializer(Marshaller marshaller, ClassLoader
> clsLdr) {
>
> this.marshaller = marshaller;
>
> this.clsLdr = clsLdr;
>
> }
>
>
>
> @Override public byte[] serialize(Object obj) throws Exception {
>
> return U.marshal(marshaller, obj);
>
> }
>
>
>
> @Override public Object deserialize(byte[] bytes) throws Exception
> {
>
> return U.unmarshal(marshaller, bytes, clsLdr);
>
> }
>
> }
>
>
>
> Andrey.
>
>
>
> *From:* Andrey Davydov 
> *Sent:* March 19, 2020, 15:43
> *To:* user@ignite.apache.org
> *Subject:* RE: Re: Unsafe usage of org.h2.util.JdbcUtils in Ignite
>
>
>
> I did some R&D with Apache Felix this week to find a workaround for
> multi-tenancy of H2.
>
>
>
> But there is a problem with multiple Ignite instances in the same JVM.
>
>
>
> As I see in
> org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing, the
> latest started Ignite will be visible via JdbcUtils.serializer, even
> though it may already be closed and its work dir deleted.
>
>
>
> Line 2105:
>
>
>
>
>
> if (JdbcUtils.serializer != null)
>
> U.warn(log, "Custom H2 serialization is already configured,
> will override.");
>
>
>
> JdbcUtils.serializer = h2Serializer();
>
>
>
> Line 2268:
>
>
>
> private JavaObjectSerializer h2Serializer() {
>
> return new JavaObjectSerializer() { // nested class has a link to the
> parent IgniteH2Indexing and, transitively, to the Ignite instance
>
> @Override public byte[] serialize(Object obj) throws Exception
> {
>
> return U.marshal(marshaller, obj); // in the common case,
> binary marshaller logic depends on the work dir
>
> }
>
>
>
> @Override public Object deserialize(byte[] bytes) throws
> Exception {
>
> ClassLoader clsLdr = ctx != null ? U.resolveClassLoader(
> ctx.config()) : null; // only the configuration is needed, but the whole ctx leaks
>
>
>
> return U.unmarshal(marshaller, bytes, clsLdr);
>
> }
>
> };
>
> }
>
>
>
>
>
> Andrey.
>
>
>
> *From:* Ilya Kasnacheev 
> *Sent:* March 19, 2020, 14:37
> *To:* user@ignite.apache.org
> *Subject:* Re: Unsafe usage of org.h2.util.JdbcUtils in Ignite
>
>
>
> Hello!
>
>
>
> As far as my understanding goes:
>
>
>
> 1) It is H2's decision to expose JdbcUtils.serializer as their public API;
> they have a public system property to override it:
>
>
>
>
>
>
>
>
>
>
> /**
>  * System property h2.javaObjectSerializer (default: null).
>  * The JavaObjectSerializer class name for java objects being stored in
>  * column of type OTHER. It must be the same on client and server to work
>  * correctly.
>  */
> public static final String JAVA_OBJECT_SERIALIZER =
>     Utils.getProperty("h2.javaObjectSerializer", null);
>
>
>
> Obviously, this was not designed with multi-tenancy of H2 in mind.
>
>
>
> If you really need multi-tenancy, I recommend starting H2 in a separate
> class loader inherited from root class loader and isolated from any Ignite
> classes.
>
>
>
> Regards,
>
> --
>
> Ilya Kasnacheev
>
>
>
>
>
> ср, 18 мар. 2020 г. в 18:54, Andrey Davydov :
>
> Hello,
>
>
>
> org.h2.util.JdbcUtils is a utility class with all static methods,
> configured via system properties, so it is a system-wide resource. It is
> incorrect to inject Ignite-specific settings into it.
>
>
>
> this - value: org.apache.ignite.internal.IgniteKernal #1
>
> <- grid - class: org.apache.ignite.internal.GridKernalContextImpl,
> value: org.apache.ignite.internal.IgniteKernal #1
>
>   <- ctx - class:
> org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing, value:
> org.apache.ignite.internal.GridKernalContextImpl #2
>
><- this$0 - class:
> org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing$10, value:
> org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing #2
>
> <- serializer - class: org.h2.util.JdbcUtils, value:
> org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing$10 

Re: Geometry via SQL?

2019-12-11 Thread Andrey Mashenkov
Hi,

Please refer to the geospatial index documentation [1].
It is quite old but should still work.

[1]
https://apacheignite.readme.io/v1.7/docs/geospatial-queries#section-executing-geospatial-queries

On Tue, Dec 10, 2019 at 10:57 PM richard.ows...@viasat.com <
richard.ows...@viasat.com> wrote:

> Is there an example of how to use the SQL interface to INSERT a GEOMETRY
> data type? The GEOMETRY data type is supported through the SQL interface
> (I can create a table with a GEOMETRY field and create the necessary
> index), but I cannot INSERT a geometry.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


-- 
Best regards,
Andrey V. Mashenkov


Re: H2 version security concern

2019-12-11 Thread Andrey Mashenkov
Hi,

The mentioned CVEs do not affect Ignite.
Please see the discussion on the dev list:

http://apache-ignite-developers.2346864.n4.nabble.com/H2-license-and-vulnerabilities-td40417.html#a40418

On Wed, Dec 11, 2019 at 2:22 AM Evgenii Zhuravlev 
wrote:

> Hi,
>
> There are plans to replace H2 with Calcite. You can read more about it on
> dev list, I've seen several threads regarding this topic there.
>
> Evgenii
>
>
Tue, 10 Dec 2019 at 13:29, Sobolevsky, Vladik :
>
>> Hi,
>>
>>
>>
>> It looks like all the recent versions of Apache Ignite (ignite-indexing)
>> depend on H2 version 1.4.197.
>> This version has at least 2 CVEs:
>>
>> https://nvd.nist.gov/vuln/detail/CVE-2018-10054
>>
>> https://nvd.nist.gov/vuln/detail/CVE-2018-14335
>>
>>
>>
>> I do understand that not all of the above CVEs can be exploited due to
>> the way Ignite uses H2, but still: are there any plans to upgrade to a
>> version that doesn't have those?
>>
>>
>>
>> Thank You,
>>
>> Vladik
>>
>>
>>
>>
>>
>>
>>
>

-- 
Best regards,
Andrey V. Mashenkov


Re: java.lang.OutOfMemoryError: GC overhead limit exceeded

2019-03-01 Thread Andrey Mashenkov
Hi,

Most likely the heap size is too low.
Try to increase Xmx to 4 GB or higher, or avoid using G1GC on small heaps,
as it is very sensitive to free heap memory.

It looks like you have a Visor node (or maybe Web Console) in the grid. Does
the OOM happen only when Visor is attached to the grid?
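A sketch of the tuning above, based on the start command in the report. The assumption is that your ignite.sh honors the standard JVM_OPTS variable (stock distributions do); drop the old -Xmx1024m from the command line so these values take effect, and note that a 4 GB heap plus a 4 GB data region leaves little headroom on an 8 GB box, so the data region may need to shrink:

```shell
# Raise the heap to the suggested 4 GB; keep direct memory as before.
export JVM_OPTS="-Xms4g -Xmx4g -XX:MaxDirectMemorySize=1G"
```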

On Fri, Mar 1, 2019 at 7:17 AM James Wang 王升平 (edvance CN) <
james.w...@edvancesecurity.com> wrote:

> OS: 4C +8GB
> Data Region = 4GB
>
> start command:
> nohup $IGNITE_HOME/bin/ignite.sh -Xmx1024m -XX:+UseG1GC
> -XX:MaxDirectMemorySize=1G grtip-config.xml > ignite.log 2>&1 &
>
> How to adjust the the memory tunning.
>
> [21:29:22,777][SEVERE][mgmt-#33519%234-236-237-241%][GridJobWorker]
> Runtime error caught during grid runnable execution: GridJobWorker
> [createTime=1551360450243, startTime=1551360453071,
> finishTime=1551360483944, taskNode=TcpDiscoveryNode
> [id=a7839266-6396-4f7c-9ef7-c3a4b2355782, addrs=[0:0:0:0:0:0:0:1%lo,
> 127.0.0.1, 192.168.1.236], sockAddrs=[/192.168.1.236:47500,
> /0:0:0:0:0:0:0:1%lo:47500, /127.0.0.1:47500], discPort=47500, order=56,
> intOrder=31, lastExchangeTime=1551151457630, loc=false,
> ver=2.7.0#20181201-sha1:256ae401, isClient=false], internal=true,
> marsh=BinaryMarshaller [], ses=GridJobSessionImpl [ses=GridTaskSessionImpl
> [taskName=o.a.i.i.v.service.VisorServiceTask, dep=GridDeployment
> [ts=1551094729510, depMode=SHARED,
> clsLdr=sun.misc.Launcher$AppClassLoader@764c12b6,
> clsLdrId=3aef2742961-5a63d018-0eb4-493e-91a4-be6d41caff85, userVer=0,
> loc=true, sampleClsName=o.a.i.i.processors.cache.CachesRegistry,
> pendingUndeploy=false, undeployed=false, usage=0],
> taskClsName=o.a.i.i.v.service.VisorServiceTask,
> sesId=8a35b792961-a7839266-6396-4f7c-9ef7-c3a4b2355782,
> startTime=1551360450899, endTime=9223372036854775807,
> taskNodeId=a7839266-6396-4f7c-9ef7-c3a4b2355782,
> clsLdr=sun.misc.Launcher$AppClassLoader@764c12b6, closed=false,
> cpSpi=null, failSpi=null, loadSpi=null, usage=1, fullSup=false,
> internal=true, topPred=ContainsNodeIdsPredicate [],
> subjId=a7839266-6396-4f7c-9ef7-c3a4b2355782, mapFut=IgniteFuture
> [orig=GridFutureAdapter [ignoreInterrupts=false, state=INIT, res=null,
> hash=327805292]], execName=null],
> jobId=9a35b792961-a7839266-6396-4f7c-9ef7-c3a4b2355782],
> jobCtx=GridJobContextImpl
> [jobId=9a35b792961-a7839266-6396-4f7c-9ef7-c3a4b2355782, timeoutObj=null,
> attrs={}], dep=GridDeployment [ts=1551094729510, depMode=SHARED,
> clsLdr=sun.misc.Launcher$AppClassLoader@764c12b6,
> clsLdrId=3aef2742961-5a63d018-0eb4-493e-91a4-be6d41caff85, userVer=0,
> loc=true, sampleClsName=o.a.i.i.processors.cache.CachesRegistry,
> pendingUndeploy=false, undeployed=false, usage=0], finishing=true,
> masterLeaveGuard=false, timedOut=false, sysCancelled=false,
> sysStopping=false, isStarted=true, job=VisorServiceJob [], held=0,
> partsReservation=null, reqTopVer=null, execName=null]
> java.lang.OutOfMemoryError: GC overhead limit exceeded
> [21:29:22,781][SEVERE][nio-acceptor-tcp-comm-#28%234-236-237-241%][] JVM
> will be halted immediately due to the failure: [failureCtx=FailureContext
> [type=CRITICAL_ERROR, err=java.lang.OutOfMemoryError: GC overhead limit
> exceeded]]
> [21:29:22,814][SEVERE][grid-timeout-worker-#23%234-236-237-241%][GridTimeoutProcessor]
> Error when executing timeout callback: CancelableTask
> [id=94ef2742961-0742c515-5b96-4aad-b07a-9f1ec60af5f3,
> endTime=1551360514795, period=3000, cancel=false, task=MetricsUpdater
> [prevGcTime=38806, prevCpuTime=2195727,
> super=o.a.i.i.managers.discovery.GridDiscoveryManager$MetricsUpdater@73766331
> ]]
> java.lang.OutOfMemoryError: GC overhead limit exceeded
>
>
> James Wang / edvance
> Mobile/WeChat: +86 135 1215 1134
>
> This message contains information that is deemed confidential and
> privileged. Unless you are the addressee (or authorized to receive for the
> addressee), you may not use, copy or disclose to anyone the message or any
> information contained in the message. If you have received the message in
> error, please advise the sender by reply e-mail and delete the message.
>


-- 
Best regards,
Andrey V. Mashenkov


Re: about https://github.com/amsokol/ignite-go-client

2019-01-21 Thread Andrey Mashenkov
Hi,

Ignite has no ignite-go-client module. The client you mentioned is not
maintained by the Apache Ignite community.

AFAIK, we have no plans to add Go client support for now.
Feel free to contribute.

Usually, Apache Ignite is released twice a year; the latest version was
released in December 2018.
So, if someone decides to contribute a Go client module to Ignite in the
next few months, there is a good chance it will be included in the next
release.

On Sat, Jan 19, 2019 at 5:15 PM 李玉珏@163 <18624049...@163.com> wrote:

> Hi,
>
> With regard to ignite-go-client, I would like to know if the client
> library is being maintained by the official developers?
>
> If it is being maintained by the official, when will it be released?
>
>
>

-- 
Best regards,
Andrey V. Mashenkov


Re: Text Query question

2019-01-10 Thread Andrey Mashenkov
Hi,

Unfortunately, it doesn't look like an open-source solution.

It is not clear how their indices are integrated with Ignite page memory.
If they do not use Ignite page memory, how do they survive failover
scenarios? No shared tests or test results are available.
Otherwise, I bet they have a headache each time they merge with a new
version of Ignite, Lucene, or the geo-index.

Anyway, their features are awesome, thanks.


On Wed, Jan 9, 2019 at 10:06 PM Manu  wrote:

> Hi! take a look to
> https://github.com/hawkore/examples-apache-ignite-extensions/ they are
> implemented a solution for persisted lucene and spatial indexes
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


-- 
Best regards,
Andrey V. Mashenkov


Re: Migrate from 2.6 to 2.7

2018-12-11 Thread Andrey Mashenkov
Hi Andrey,

It looks like you are trying to run a SELECT SQL query inside an explicit
transaction.
Please let us know if that is not the case.

This worked in 2.6, as SQL had no transactional support and a query simply
ignored the transactional context (however, "SELECT ... FOR UPDATE" did not).
As of 2.7, the SQL SELECT query supports transactions for the
TRANSACTIONAL_SNAPSHOT cache mode and tries to validate the transaction
context, which causes the exception you faced.
So you have to either use the TRANSACTIONAL_SNAPSHOT atomicity mode, as Ivan
suggested, or move the SELECT query outside of the transaction.

On Tue, Dec 11, 2018 at 5:53 PM Павлухин Иван  wrote:

> Hi Andrey,
>
> It looks like your persisted data was read incorrectly by upgraded
> Ignite. It would be great if you provide runnable reproducer.
>
> Regarding Optimistic Serializable transactions. They are still
> supported by caches with TRANSACTIONAL atomicity mode. In your error
> it looks like that your caches are treated as TRANSACTIONAL_SNAPSHOT
> atomicity mode. You can read about that (experimental) mode in
> documentation [1]. Briefly, this mode allows SQL transactions and
> supports only PESSIMISTIC REPEATABLE_READ transaction configuration.
>
> [1]
> https://apacheignite-sql.readme.io/v2.7/docs/multiversion-concurrency-control
> Mon, 10 Dec 2018 at 17:13, Андрей Григорьев :
> >
> > Hello, when I tried to migrate to new version i had error. Optimistic
> Serializable isn't supported?
> >
> >
> > ```
> > Caused by: class
> org.apache.ignite.internal.processors.query.IgniteSQLException: Only
> pessimistic repeatable read transactions are supported at the moment.
> > at
> org.apache.ignite.internal.processors.cache.mvcc.MvccUtils.tx(MvccUtils.java:690)
> > at
> org.apache.ignite.internal.processors.cache.mvcc.MvccUtils.tx(MvccUtils.java:671)
> > at
> org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.runQueryTwoStep(IgniteH2Indexing.java:1793)
> > at
> org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.doRunDistributedQuery(IgniteH2Indexing.java:2610)
> > at
> org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.doRunPrepared(IgniteH2Indexing.java:2315)
> > at
> org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.querySqlFields(IgniteH2Indexing.java:2209)
> > at
> org.apache.ignite.internal.processors.query.GridQueryProcessor$4.applyx(GridQueryProcessor.java:2135)
> > at
> org.apache.ignite.internal.processors.query.GridQueryProcessor$4.applyx(GridQueryProcessor.java:2130)
> > at
> org.apache.ignite.internal.util.lang.IgniteOutClosureX.apply(IgniteOutClosureX.java:36)
> > at
> org.apache.ignite.internal.processors.query.GridQueryProcessor.executeQuery(GridQueryProcessor.java:2707)
> > at
> org.apache.ignite.internal.processors.query.GridQueryProcessor.querySqlFields(GridQueryProcessor.java:2144)
> > at
> org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.query(IgniteCacheProxyImpl.java:685)
> > ... 10 more
> > ```
> > Enviroment: JDK 1.8, Apache Ignite 2.7 (clear install, persistence mode,
> 3 nodes).  Apache Ignite Client 2.7 from maven:
> >
> > 2.7.0
> >
> > 
> > org.apache.ignite
> > ignite-core
> > ${ignite.version}
> > 
> > 
> > org.apache.ignite
> > ignite-indexing
> > ${ignite.version}
> > 
> >
> > Cache configuration:
> >
> > CacheConfiguration cfg = new CacheConfiguration<>();
> > cfg.setBackups(backupsCount);
> > cfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);
> >
> > Transaction configuration:
> >
> > TransactionConfiguration txCfg = new TransactionConfiguration();
> > txCfg.setDefaultTxConcurrency(TransactionConcurrency.OPTIMISTIC);
> > txCfg.setDefaultTxIsolation(TransactionIsolation.SERIALIZABLE);
> > cfg.setTransactionConfiguration(txCfg);
> >
> > And second exception from nodes, when i try to read persisted value from
> cache (set transaction mode to pessimistic repetable).
> >
> > ```
> > [15:33:46,990][SEVERE][query-#278][GridMapQueryExecutor] Failed to
> execute local query.
> > class org.apache.ignite.IgniteCheckedException: Failed to execute SQL
> query. Внутренняя ошибка: "class
> org.apache.ignite.internal.processors.cache.persistence.tree.CorruptedTreeException:
> Runtime failure on bounds: [lower=RowSimple [vals=[null, null, null, null,
> null, null, null, 'e3f070fa-7888-4fbb-baac-a02d5338e217', 1]],
> upper=RowSimple [vals=[null, null, null, null, null, null, null,
> 'e3f070fa-7888-4fbb-baac-a02d5338e217', 1]]]"
> > General error: "class
> org.apache.ignite.internal.processors.cache.persistence.tree.CorruptedTreeException:
> Runtime failure on bounds: [lower=RowSimple [vals=[null, null, null, null,
> null, null, null, 'e3f070fa-7888-4fbb-baac-a02d5338e217', 1]],
> upper=RowSimple [vals=[null, null, null, null, null, null, null,
> 'e3f070fa-7888-4fbb-baac-a02d5338e217', 1]]]"; SQL statement:
> > SELECT
> > "HumanName".__Z0._KEY __C0_0,
> > "HumanName".__Z0._VAL __C0_1
> > FROM "HumanName".HUMANNAMEMODEL __Z0
> > WHERE (__Z0.PARENTID = ?1) AND (__Z0.VERSION = ?2) 

Re: SQL Query plan confusion

2018-12-07 Thread Andrey Mashenkov
Hi,

The difference between the query plans is that the 'bad' one uses the
PUBLIC.TEST_DATA_K_V_ID index instead of PUBLIC.TEST_DATA_ID_K_V.
Have you tried removing the other indices and leaving only TEST_DATA_ID_K_V?

It looks weird to me: the 'good' query plan shows unicast requests, as ID
is the affinity field, but the 'bad' one says nothing about a broadcast.
Are you running the queries with the same flags? Do you have
setDistributedJoins=true?

Another possible reason: TEST_DATA_K_V_ID can't be used efficiently, as the
default inline size is 10, which means only the first 7 chars of the
composite index will be inlined. You have at least 2 fields with the
'trans.c' prefix, so Ignite may scan a huge number of rows when using this
index.
Try to remove it, or use hints to force Ignite to use the 'efficient' one.

If you really need 'k' as the first column of a composite index, you can
try to increase the inline size.
FYI: for now, a fixed-length column uses 1 additional byte when inlined
(for a column type prefix), and a var-length column requires 3 additional
bytes (1 byte for the type, 2 bytes for the size).
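Two sketches of the options above, using the index names from the thread (the INLINE_SIZE value is a guess; size it from the actual column lengths):

```sql
-- Option 1: force the 'efficient' index with a hint instead of dropping
-- the other one.
SELECT id FROM public.test_data USE INDEX (test_data_id_k_v)
WHERE k = 'trans.cust.last_name' AND v = 'Smythe-Veall';

-- Option 2: rebuild the composite index with a larger inline size so more
-- than the first 7 chars of 'k' are compared without a data-page lookup.
DROP INDEX IF EXISTS test_data_k_v_id;
CREATE INDEX test_data_k_v_id ON public.test_data (k, v, id) INLINE_SIZE 64;
```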


On Fri, Dec 7, 2018 at 11:19 AM Vladimir Ozerov 
wrote:

> Hi Jose,
>
> I am a bit lost in two provided explains. Both "godd" and "bad" contain "k
> = 'trans.cust.first_name'" condition. Could you please confirm that they
> are correct? Specifically, I cannot understand why this condition is
> present in "good" explain.
>
> On Tue, Nov 27, 2018 at 12:06 PM joseheitor 
> wrote:
>
>> 1. - I have a nested join query on a table of 8,000,000 records which
>> performs similar or better than PostreSQL (~10ms) on my small test setup
>> (2x
>> nodes, 8GB, 2CPU):
>>
>> SELECT
>> mainTable.pk, mainTable.id, mainTable.k, mainTable.v
>> FROM
>> public.test_data AS mainTable
>> INNER JOIN (
>> SELECT
>> lastName.id
>> FROM
>> (SELECT id FROM public.test_data WHERE k
>> = 'trans.cust.last_name' AND v
>> = 'Smythe-Veall') AS lastName
>> INNER JOIN
>> (SELECT id FROM
>> public.test_data WHERE k = 'trans.date' AND v =
>> '2017-12-21') AS transDate ON transDate.id = lastName.id
>> INNER JOIN
>> (SELECT id FROM
>> public.test_data WHERE k = 'trans.amount' AND cast(v
>> AS integer) > 9) AS transAmount ON transAmount.id = lastName.id
>> ) AS subTable ON mainTable.id = subTable.id
>> ORDER BY 1, 2
>>
>>
>> 2. - By simply adding a WHERE clause at the end, the performance becomes
>> catastrophic on Ignite (~10s for subsequent queries - first query takes
>> many
>> minutes). On PostgreSQL performance does not change...
>>
>> SELECT
>> mainTable.pk, mainTable.id, mainTable.k, mainTable.v
>> FROM
>> public.test_data AS mainTable
>> INNER JOIN (
>> SELECT
>> lastName.id
>> FROM
>> (SELECT id FROM public.test_data WHERE k
>> = 'trans.cust.last_name' AND v
>> = 'Smythe-Veall') AS lastName
>> INNER JOIN
>> (SELECT id FROM
>> public.test_data WHERE k = 'trans.date' AND v =
>> '2017-12-21') AS transDate ON transDate.id = lastName.id
>> INNER JOIN
>> (SELECT id FROM
>> public.test_data WHERE k = 'trans.amount' AND cast(v
>> AS integer) > 9) AS transAmount ON transAmount.id = lastName.id
>> ) AS subTable ON mainTable.id = subTable.id
>> *WHERE
>> mainTable.k = 'trans.cust.first_name'*
>> ORDER BY 1, 2
>>
>> What can I do to optimise this query for Ignite???
>>
>> (Table structure and query plans attached for reference)
>>
>> Thanks,
>> Jose
>> table.sql
>> 
>> good-join-query.txt
>> <
>> http://apache-ignite-users.70518.x6.nabble.com/file/t1652/good-join-query.txt>
>>
>> bad-join-query.txt
>> <
>> http://apache-ignite-users.70518.x6.nabble.com/file/t1652/bad-join-query.txt>
>>
>>
>>
>>
>> --
>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>>
>

-- 
Best regards,
Andrey V. Mashenkov


Re: Slow select distinct query on primary key

2018-11-30 Thread Andrey Mashenkov
Yuri, how did you get inline size 60?
I'd think 55 should be enough to inline ACCOUNT_ID: 55 = 1 /* byte, type
code */ + 4 /* int, array length */ + 50 /* data size for ANSI chars */.

On Fri, Nov 30, 2018 at 1:25 PM Юрий  wrote:

> Please provide the explain plan of the query to check that the index is
> used: EXPLAIN {your select statement}
>
> Also, I noticed ACCOUNT_ID has length 50. You need to increase the inline
> index size for this index.
>
> Try creating the index with the following command: CREATE INDEX
> PERF_POSITIONS_IDX ON PERF_POSITIONS (ACCOUNT_ID) INLINE_SIZE 60;
>
> Thu, 29 Nov 2018 at 16:47, yongjec :
>
>> Hi,
>>
>> I tried the additional index as you suggested, but it did not improve the
>> query time. The query still takes 58-61 seconds.
>>
>> CREATE INDEX PERF_POSITIONS_IDX ON PERF_POSITIONS (ACCOUNT_ID);
>> CREATE INDEX PERF_POSITIONS_IDX2 ON PERF_POSITIONS (ACCOUNT_ID,
>> EFFECTIVE_DATE, FREQUENCY, SOURCE_ID, SECURITY_ALIAS, POSITION_TYPE);
>>
>>
>> I also tried the single column index only without the composite index.
>> That
>> did not make any difference in query time, either.
>>
>> CREATE INDEX PERF_POSITIONS_IDX ON PERF_POSITIONS (ACCOUNT_ID);
>>
>>
>>
>> --
>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>>
>
>
> --
> Live with a smile! :D
>


-- 
Best regards,
Andrey V. Mashenkov


Re: Cache Structure

2018-11-13 Thread Andrey Mashenkov
Hi,

Yes, to add a new item to a list value in the second cache, Ignite will
have to deserialize the whole list (with all its items), add the new item,
and then serialize the list again.
You can try to use BinaryObjects to avoid unnecessary deserialization of
the list items [1].
Also note that the k2 and k1 data will have different distributions, and
adding a new instance of T will require 2 operations (as you mentioned) on
different nodes.

If the k2 <- k1 relation is one-to-many, there is another way to achieve
the same thing using SQL [2].
With this approach, adding a new instance will be a single operation on one
node, and Ignite just needs to update the local index in addition; however,
a query by k2 will be a broadcast unless the data is collocated [3].


[1]
https://apacheignite.readme.io/docs/binary-marshaller#section-binaryobject-cache-api
[2]
https://apacheignite-sql.readme.io/docs/java-sql-api#section-sqlfieldsqueries
[3] https://apacheignite.readme.io/docs/affinity-collocation
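A sketch of the SQL alternative from [2] for a one-to-many k2 <- k1 relation; all names are illustrative. Each instance of T becomes one row, so an add is a single-row insert and the per-k2 "list" is just a query:

```sql
CREATE TABLE t_items (
    k1      VARCHAR PRIMARY KEY,  -- unique key
    k2      VARCHAR,              -- non-unique grouping key
    payload VARCHAR               -- the serialized T fields
) WITH "template=partitioned";

-- Replaces the second cache's key -> list-of-keys structure.
CREATE INDEX t_items_k2_idx ON t_items (k2);

-- Adding an instance of T is a single operation on one node:
INSERT INTO t_items (k1, k2, payload) VALUES ('id-1', 'group-A', '...');

-- The "list" for a k2 (a broadcast unless the data is collocated by k2):
SELECT k1, payload FROM t_items WHERE k2 = 'group-A';
```

Collocating by k2, per [3], would mean including k2 in the primary key and declaring it the affinity key (e.g. WITH "affinity_key=k2"), which turns the k2 lookup into a single-node query.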

On Tue, Nov 13, 2018 at 11:34 PM Ramin Farajollah (BLOOMBERG/ 731 LEX) <
rfarajol...@bloomberg.net> wrote:

> Hi,
>
> Please help me structure the cache to store instances of a type, say T.
> I'd like to cache the objects in two different ways:
>
> 1. By a unique key (k1), where the value is a single instance
> 2. By a non-unique key (k2), where the value is a list of instances
>
> Please comment on my approach:
>
> - Create a cache with k1 to an instance of T.
> - Create a second cache with k2 to a list of k1 keys.
> - To add a new instance of T, I will have to update both caches. Will this
> result in serializing the instance (in the first cache) and the list of
> keys (in the send cache)? -- Assume not on-heap.
> - If so, will each addition/deletion re-serialize the entire list in the
> second cache?
>
> Thank you!
> Ramin
>
>

-- 
Best regards,
Andrey V. Mashenkov


Re: How could imporeve UNION ALL performance

2018-11-10 Thread Andrey Mashenkov
Hi,
Can you share the query plan?
Do you have any indices on the last and lastPrice columns? Have you tried
to run the queries separately, without the UNION?


Thu, 8 Nov 2018, 23:47 wengyao04 wengya...@gmail.com:

> Hi, we use Ignite as our search backend.
> Our cache is in REPLICATED mode with PRIMARY_SYNC. Our query looks like
> the following. We find that the query time is the sum of the two
> sub-select queries. Since the two sub-selects are independent, is there a
> way to run them in parallel? Thanks
>
> SELECT market.id, market.name, market.lastPrice
> FROM ((
>SELECT marketR.id, marketR.name, marketR.lastPrice
>FROM STOCK marketR
>JOIN TABLE(exchange  int = ?) ON i.exchange = marketR.exchange
>WHERE marketR.subscription = 'TRUE' AND marketR.last > 50.
>)
>UNION ALL
>(
> SELECT marketD.id, marketD.name, marketD.lastPrice
> FROM STOCK marketD
> JOIN TABLE(exchange  int = ?) ON i.exchange = marketD.exchange
> WHERE marketD.subscription = 'FALSE' AND marketD.lastPrice >
> 50.
>))
> market ORDER BY market.lastPrice DESC LIMIT 5000
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: OVER() clause in ignite

2018-10-24 Thread Andrey Mashenkov
Hi,

Ignite has no support for window functions due to their lack in the underlying H2.
OVER is part of the SQL ANSI'03 standard, while Ignite claims to be ANSI'99
compliant.

The only workaround is to rewrite the query using joins or to calculate the
result manually.
User-defined aggregate functions are also not supported, and we have no plans for
this, see IGNITE-2177 [1].

There is a ticket for window-function support, IGNITE-6918 [2], but it is not
planned for any future version for now.

[1] https://issues.apache.org/jira/browse/IGNITE-2177
[2] https://issues.apache.org/jira/browse/IGNITE-6918
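For the `count(*) over()` example from the question, the join-based rewrite could look like this (a sketch using the EMPLOYEE table from the question):

```sql
-- Emulate "COUNT(*) OVER() AS totalrecords" with a cross join on a
-- single-row aggregate subquery.
SELECT e.*, t.totalrecords
FROM EMPLOYEE e,
     (SELECT COUNT(*) AS totalrecords FROM EMPLOYEE) t;
```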

On Wed, Oct 24, 2018 at 9:55 AM Sriveena Mattaparthi <
sriveena.mattapar...@ekaplus.com> wrote:

> Thanks Mikael..Could you please suggest any other alternative to achieve
> this?
>
>
>
> Regards,
>
> Sriveena
>
>
>
> *From:* Mikael [mailto:mikael-arons...@telia.com]
> *Sent:* Wednesday, October 24, 2018 11:47 AM
> *To:* user@ignite.apache.org
> *Subject:* Re: OVER() clause in ignite
>
>
>
> Hi!
>
> As far as I know H2 does not support OVER.
>
> Mikael
>
> Den 2018-10-24 kl. 07:28, skrev Sriveena Mattaparthi:
>
> Do we support over() clause in SQL in ignite like
>
> select count(*) over() totalrecords  from EMPLOYEE;
>
>
>
> please confirm.
>
>
>
> Thanks & Regards,
>
> Sriveena
>
>
>
> “Confidentiality Notice: The contents of this email message and any
> attachments are intended solely for the addressee(s) and may contain
> confidential and/or privileged information and may be legally protected
> from disclosure. If you are not the intended recipient of this message or
> their agent, or if this message has been addressed to you in error, please
> immediately alert the sender by reply email and then delete this message
> and any attachments. If you are not the intended recipient, you are hereby
> notified that any use, dissemination, copying, or storage of this message
> or its attachments is strictly prohibited.”
>
>
> “Confidentiality Notice: The contents of this email message and any
> attachments are intended solely for the addressee(s) and may contain
> confidential and/or privileged information and may be legally protected
> from disclosure. If you are not the intended recipient of this message or
> their agent, or if this message has been addressed to you in error, please
> immediately alert the sender by reply email and then delete this message
> and any attachments. If you are not the intended recipient, you are hereby
> notified that any use, dissemination, copying, or storage of this message
> or its attachments is strictly prohibited.”
>


-- 
Best regards,
Andrey V. Mashenkov


Re: Ignite Query Slow

2018-10-02 Thread Andrey Mashenkov
Hi,
Ignite provides two ways of configuring indices: QueryEntity-based and
annotation-based.
It seems you either forgot to call setIndexedTypes or are trying to mix both of
these approaches.

Please find examples in the documentation available via the link I provided
earlier.

Mon, Oct 1, 2018, 18:05 Skollur :

> I tried with setting up field with @QuerySqlField (index = true) and still
> slow.
>
> Example :
> /** Value for customerId. */
> @QuerySqlField (index = true)
> private Long customerId
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Ignite Query Slow

2018-09-28 Thread Andrey Mashenkov
Please take a look at this.

https://apacheignite.readme.io/v2.6/docs/indexes#section-queryentity-based-configuration

On Sep 29, 2018, 3:41, "Skollur"  wrote:

Thank you for suggestion. Can you give example how to create secondary
indices?




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Apache Ignite Join returns wrong records

2018-09-28 Thread Andrey Mashenkov
Hi,

Try using qry.setDistributedJoins(true). This should always return the correct
result.

However, it has poor performance due to intensive data exchange between
nodes.

By default, Ignite joins only the data available locally on each node. Try
to collocate your data to get better performance with non-distributed joins.
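For the tables from the question, collocating by the join column could be sketched as below (assuming the tables are created via DDL; column lists are abbreviated from the question, and the affinity column must be part of the primary key):

```sql
-- Collocate both tables by Account_Type so joining rows land on the same
-- node and a non-distributed join returns complete results.
CREATE TABLE SUMMARY (
  DW_Id BIGINT,
  Summary_Number VARCHAR,
  Account_Type VARCHAR,
  PRIMARY KEY (DW_Id, Account_Type)
) WITH "AFFINITY_KEY=Account_Type";

CREATE TABLE SEQUENCE (
  DW_Id BIGINT,
  Account_Type VARCHAR,
  PRIMARY KEY (DW_Id, Account_Type)
) WITH "AFFINITY_KEY=Account_Type";
```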

Sat, Sep 29, 2018, 3:39 Skollur :

> I am using Apache Ignite 2.6 version. I have two tables as below i.e
> SUMMARY
> and SEQUENCE
>
> SUMMARY-> DW_Id bigint (Primary key) , Sumamry_Number varchar, Account_Type
> varchar
> SEQUENCE-> DW_Id bigint (Primary key) , Account_Type varchar
>
> Database and cache has same number of records in both tables. Database JOIN
> query returns 1500 counts/records and However IGNITE JOIN returns only 4
> counts and 4 records. Ignite cache is build based on auto generated web
> console. Query used is as below. There is no key involved while joining two
> cache tables here from two cache.tables. This is simple join based on
> value(i.e account type - string). How to get correct value for JOIN in
> Ignite?
>
> SELECT COUNT(*) FROM SUMMARY LIQ
> INNER JOIN SEQUENCE CPS ON
> LIQ.Account_Type = CPS.Account_Type
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Ignite Query Slow

2018-09-28 Thread Andrey Mashenkov
Hi,

Please try to create secondary indices on the join columns, otherwise the query
will fall back to a full table scan.

Then, if you still see SCANs, as a next step you can try to rewrite your query
with a different table join order. Sometimes the underlying H2 changes the join
order to a non-optimal one. In that case qry.setEnforceJoinOrder(true) may be
helpful.

It looks like there should be a single lookup on the ID column, and 2 index
scans for joining.
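For the plan quoted below, the secondary indices on the join column might look like this (schema and table names copied from the plan; an illustrative sketch, not a verified fix):

```sql
-- Both tables are joined on GROUP_CUSTOMER_ID but show __SCAN_ in the plan;
-- an index on the join column turns each scan into an index lookup.
CREATE INDEX group_addr_cust_idx
  ON "GroupAddressCache".GROUP_ADDRESS (GROUP_CUSTOMER_ID);
CREATE INDEX group_cust_idx
  ON "GroupCache"."[GROUP]" (GROUP_CUSTOMER_ID);
```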

Fri, Sep 28, 2018, 19:02 Skollur :

> Here is the explain query
>
> #   PLAN
> 1   "SELECT
> ADDR__Z2.ADDRESS_LINE_1 AS __C0_0,
> ADDR__Z2.ADDRESS_LINE_2 AS __C0_1,
> ADDR__Z2.ADDRESS_LINE_3 AS __C0_2,
> ADDR__Z2.STREET AS __C0_3,
> ADDR__Z2.CITY AS __C0_4,
> ADDR__Z2.STATE AS __C0_5,
> ADDR__Z2.COUNTRY AS __C0_6,
> ADDR__Z2.ZIP_POSTAL AS __C0_7
> FROM "GroupAddressCache".GROUP_ADDRESS GA__Z1
> /* "GroupAddressCache".GROUP_ADDRESS.__SCAN_ */
> /* WHERE (GA__Z1.ADDRESS_TYPE = 'Mailing')
> AND (GA__Z1.RECORD_IS_VALID = 'Y')
> */
> INNER JOIN "GroupCache"."[GROUP]" GRP__Z0
> /* "GroupCache"."[GROUP]".__SCAN_ */
> ON 1=1
> /* WHERE (GRP__Z0.RECORD_IS_VALID = 'Y')
> AND ((GRP__Z0.GROUP_CUSTOMER_ID = 44)
> AND (GRP__Z0.GROUP_CUSTOMER_ID = GA__Z1.GROUP_CUSTOMER_ID))
> */
> INNER JOIN "AddressCache".ADDRESS ADDR__Z2
> /* "AddressCache"."_key_PK_proxy": DW_ID = GA__Z1.ADDRESS_ID */
> ON 1=1
> WHERE (GA__Z1.ADDRESS_ID = ADDR__Z2.DW_ID)
> AND ((GA__Z1.ADDRESS_TYPE = 'Mailing')
> AND ((GA__Z1.RECORD_IS_VALID = 'Y')
> AND ((GRP__Z0.GROUP_CUSTOMER_ID = GA__Z1.GROUP_CUSTOMER_ID)
> AND ((GRP__Z0.GROUP_CUSTOMER_ID = 44)
> AND (GRP__Z0.RECORD_IS_VALID = 'Y')"
> 2   "SELECT
> __C0_0 AS ADDRESS_LINE_1,
> __C0_1 AS ADDRESS_LINE_2,
> __C0_2 AS ADDRESS_LINE_3,
> __C0_3 AS STREET,
> __C0_4 AS CITY,
> __C0_5 AS STATE,
> __C0_6 AS COUNTRY,
> __C0_7 AS ZIP_POSTAL
> FROM PUBLIC.__T0
> /* PUBLIC."merge_scan" */"
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Query execution too long even after providing index

2018-09-03 Thread Andrey Mashenkov
Hi,

Have you tried to increase the index inlineSize? It is 10 bytes by default.

Your indices use simple value types (Java primitives), so all columns can
be easily inlined.
It should be enough to increase inlineSize up to 32 bytes (3 longs + 1 int
= 3*(8 /*long*/ + 1/*type code*/) + (4/*int*/ + 1/*type code*/)) to inline
all columns for idx1, and up to 27 bytes (3 longs) for idx2.

You can try to benchmark queries with different inline sizes to find the
optimal ratio between speedup and index size.
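If the index is created via DDL, the inline size can be set directly; a sketch matching the annotated column order (with annotation-based config, setting the same value programmatically on the index definition is assumed to be the equivalent knob):

```sql
-- 32 bytes inlines all four columns of idx1:
-- 3 * (8-byte long + 1 type byte) + (4-byte int + 1 type byte) = 32.
CREATE INDEX ip_container_ipv4_idx1
  ON IpContainerIpV4Data (subscriptionId, moduleId, ipStart, ipEnd DESC)
  INLINE_SIZE 32;
```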



On Mon, Sep 3, 2018 at 5:12 PM Prasad Bhalerao 
wrote:

> Hi,
> My cache has 1 million rows and the sql is as follows.
> This sql is taking around 1.836 seconds to execute and this time increases
> as I go on adding the data to this cache. Some time it takes more than 4
> seconds.
>
> Is there any way to improve the execution time?
>
> *SQL:*
> SELECT id, moduleId,ipEnd, ipStart
> FROM IpContainerIpV4Data USE INDEX(ip_container_ipv4_idx1)
> WHERE subscriptionId = ?  AND moduleId = ? AND (ipStart<=
> ? AND ipEnd   >= ?)
> UNION ALL
> SELECT id, moduleId,ipEnd, ipStart
> FROM IpContainerIpV4Data USE INDEX(ip_container_ipv4_idx1)
> WHERE subscriptionId = ? AND moduleId = ? AND (ipStart<= ?
> AND ipEnd   >= ?)
> UNION ALL
> SELECT id, moduleId,ipEnd, ipStart
> FROM IpContainerIpV4Data USE INDEX(ip_container_ipv4_idx1)
> WHERE subscriptionId = ? AND moduleId = ? AND (ipStart>= ?
> AND ipEnd   <= ?)
>
> *Indexes are as follows:*
>
> public class IpContainerIpV4Data implements Data, 
> UpdatableData {
>
>   @QuerySqlField
>   private long id;
>
>   @QuerySqlField(orderedGroups = {@QuerySqlField.Group(name = 
> "ip_container_ipv4_idx1", order = 1)})
>   private int moduleId;
>
>   @QuerySqlField(orderedGroups = {@QuerySqlField.Group(name = 
> "ip_container_ipv4_idx1", order = 0),
>   @QuerySqlField.Group(name = "ip_container_ipv4_idx2", order = 0)})
>   private long subscriptionId;
>
>   @QuerySqlField(orderedGroups = {@QuerySqlField.Group(name = 
> "ip_container_ipv4_idx1", order = 3, descending = true),
>   @QuerySqlField.Group(name = "ip_container_ipv4_idx2", order = 2, 
> descending = true)})
>   private long ipEnd;
>
>   @QuerySqlField(orderedGroups = {@QuerySqlField.Group(name = 
> "ip_container_ipv4_idx1", order = 2),
>   @QuerySqlField.Group(name = "ip_container_ipv4_idx2", order = 1)})
>   private long ipStart;
>
> }
>
>
> *Execution Plan:*
>
> 2018-09-03 19:32:03,098 232176 [pub-#78%springDataNode%] INFO
> c.q.a.g.d.IpContainerIpV4DataGridDaoImpl - SELECT
> __Z0.ID AS __C0_0,
> __Z0.MODULEID AS __C0_1,
> __Z0.IPEND AS __C0_2,
> __Z0.IPSTART AS __C0_3
> FROM IP_CONTAINER_IPV4_CACHE.IPCONTAINERIPV4DATA __Z0 USE INDEX
> (IP_CONTAINER_IPV4_IDX1)
> /* IP_CONTAINER_IPV4_CACHE.IP_CONTAINER_IPV4_IDX1: SUBSCRIPTIONID = ?1
> AND MODULEID = ?2
> AND IPSTART <= ?3
> AND IPEND >= ?4
>  */
> WHERE ((__Z0.SUBSCRIPTIONID = ?1)
> AND (__Z0.MODULEID = ?2))
> AND ((__Z0.IPSTART <= ?3)
> AND (__Z0.IPEND >= ?4))
> 2018-09-03 19:32:03,098 232176 [pub-#78%springDataNode%] INFO
> c.q.a.g.d.IpContainerIpV4DataGridDaoImpl - SELECT
> __Z1.ID AS __C1_0,
> __Z1.MODULEID AS __C1_1,
> __Z1.IPEND AS __C1_2,
> __Z1.IPSTART AS __C1_3
> FROM IP_CONTAINER_IPV4_CACHE.IPCONTAINERIPV4DATA __Z1 USE INDEX
> (IP_CONTAINER_IPV4_IDX1)
> /* IP_CONTAINER_IPV4_CACHE.IP_CONTAINER_IPV4_IDX1: SUBSCRIPTIONID = ?5
> AND MODULEID = ?6
> AND IPSTART <= ?7
> AND IPEND >= ?8
>  */
> WHERE ((__Z1.SUBSCRIPTIONID = ?5)
> AND (__Z1.MODULEID = ?6))
> AND ((__Z1.IPSTART <= ?7)
> AND (__Z1.IPEND >= ?8))
> 2018-09-03 19:32:03,098 232176 [pub-#78%springDataNode%] INFO
> c.q.a.g.d.IpContainerIpV4DataGridDaoImpl - SELECT
> __Z2.ID AS __C2_0,
> __Z2.MODULEID AS __C2_1,
> __Z2.IPEND AS __C2_2,
> __Z2.IPSTART AS __C2_3
> FROM IP_CONTAINER_IPV4_CACHE.IPCONTAINERIPV4DATA __Z2 USE INDEX
> (IP_CONTAINER_IPV4_IDX1)
> /* IP_CONTAINER_IPV4_CACHE.IP_CONTAINER_IPV4_IDX1: SUBSCRIPTIONID = ?9
> AND MODULEID = ?10
> AND IPSTART >= ?11
> AND IPEND <= ?12
>  */
> WHERE ((__Z2.SUBSCRIPTIONID = ?9)
> AND (__Z2.MODULEID = ?10))
> AND ((__Z2.IPSTART >= ?11)
> AND (__Z2.IPEND <= ?12))
> 2018-09-03 19:32:03,098 232176 [pub-#78%springDataNode%] INFO
> c.q.a.g.d.IpContainerIpV4DataGridDaoImpl - ((SELECT
> __C0_0 AS ID,
> __C0_1 AS MODULEID,
> __C0_2 AS IPEND,
> __C0_3 AS IPSTART
> FROM PUBLIC.__T0
> /* IP_CONTAINER_IPV4_CACHE."merge_scan" */)
> UNION ALL
> (SELECT
> __C1_0 AS ID,
> __C1_1 AS MODULEID,
> __C1_2 AS IPEND,
> __C1_3 AS IPSTART
> FROM PUBLIC.__T1
> /* IP_CONTAINER_IPV4_CACHE."merge_scan" */))
> UNION ALL
> (SELECT
> __C2_0 AS ID,
> __C2_1 AS MODULEID,
> __C2_2 AS IPEND,
> __C2_3 AS IPSTART
> FROM PUBLIC.__T2
> /* IP_CONTAINER_IPV4_CACHE."merge_scan" */)
>
>
>

-- 
Best 

Re: Query 3x slower with index

2018-09-03 Thread Andrey Mashenkov
Hi,

Actually, the first query uses the index on the affinity key, which looks more
efficient than the index on the category_id column.
The first query can process groups one by one and stream partial results
from the map phase to the reduce phase, as it uses a sorted index lookup,
while the second query has to process the full dataset on the map phase before
passing it to the reduce phase.

Try to use a composite index (customer_id, category_id).

Also, the SqlFieldsQuery.setCollocated(true) flag can help Ignite build a more
efficient plan when GROUP BY on a collocated column is used.
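A sketch of the suggested composite index for the table from the question (DDL form; the index name is illustrative):

```sql
-- Lets the map phase walk groups in customer_id order while still
-- filtering on category_id, so partial results can stream to the reducer.
CREATE INDEX gatable2_cust_cat_idx ON GATABLE2 (customer_id, category_id);
```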

On Sun, Sep 2, 2018 at 2:02 AM eugene miretsky 
wrote:

> Hello,
>
> Schema:
>
>-
>
>PUBLIC.GATABLE2.CUSTOMER_ID
>
>PUBLIC.GATABLE2.DT
>
>PUBLIC.GATABLE2.CATEGORY_ID
>
>PUBLIC.GATABLE2.VERTICAL_ID
>
>PUBLIC.GATABLE2.SERVICE
>
>PUBLIC.GATABLE2.PRODUCT_VIEWS_APP
>
>PUBLIC.GATABLE2.PRODUCT_CLICKS_APP
>
>PUBLIC.GATABLE2.PRODUCT_VIEWS_WEB
>
>PUBLIC.GATABLE2.PRODUCT_CLICKS_WEB
>
>PUBLIC.GATABLE2.PDP_SESSIONS_APP
>
>PUBLIC.GATABLE2.PDP_SESSIONS_WEB
>- pkey = customer_id,dt
>- affinityKey = customer
>
> Query:
>
>- select COUNT(*) FROM( Select customer_id from GATABLE2 where
>category_id in (175925, 101450, 9005, 175930, 175930, 175940,175945,101450,
>6453) group by customer_id having SUM(product_views_app) > 2 OR
>SUM(product_clicks_app) > 1 )
>
> The table has 600M rows.
> At first, the query took 1m, when we added an index on category_id the
> query started taking 3m.
>
> The SQL execution plan for both queries is attached.
>
> We are using a single x1.16xlarge insntace with query parallelism set to
> 32
>
> Cheers,
> Eugene
>
>

-- 
Best regards,
Andrey V. Mashenkov


Re: Slow SQL query uses only a single CPU

2018-08-22 Thread Andrey Mashenkov
1. /* PUBLIC.AFFINITY_KEY */ means the index on the affinity column is used.
The full index will be scanned against the date condition.
As I wrote, you can create a composite index to speed up the index scan.
2. "group sorted" means an index is used for grouping. It looks like H2 has an
optimization for this, and grouping can be applied on the fly.
Unsorted grouping would mean that we have to fetch the full dataset and only
then group it.
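The composite index mentioned in point 1 could be sketched as follows (table and column names taken from the thread below; illustrative):

```sql
-- Walk customers in order and apply the date condition inside each group,
-- instead of scanning the whole affinity-key index against the date filter.
CREATE INDEX gal2ru_cust_dt_idx ON GAL2RU (customer_id, dt);
```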

On Wed, Aug 22, 2018 at 5:21 PM eugene miretsky 
wrote:

> Just as a reference, bellow are 2 execution plans with and without the
> index on a very similar table.
>
> Adding the index remove /* PUBLIC.AFFINITY_KEY */ and /* group sorted */.
> 1) Does PUBLIC.AFFINITY_KEY mean that DT is the affinity key. We are
> setting customer_id as an affinity key. Is there a way to verify that?
> 2) Is it possible that the removal of /* group sorted */ indicates that
> the result of group_by must be sorted? (hence taking a long time)
>
> *Query*
> Select COUNT (*) FROM (SELECT customer_id FROM GAL2RU where dt >
> '2018-06-12' GROUP BY customer_id having SUM(ru_total_app_sessions_count) >
> 2 AND MAX(ru_total_web_sessions_count) < 1)
>
> *Without an index*
>
> SELECT
>
> __Z0.CUSTOMER_ID AS __C0_0,
>
> SUM(__Z0.RU_TOTAL_APP_SESSIONS_COUNT) AS __C0_1,
>
> MAX(__Z0.RU_TOTAL_WEB_SESSIONS_COUNT) AS __C0_2
>
> FROM PUBLIC.GAL2RU __Z0
>
> /* PUBLIC.AFFINITY_KEY */
>
> WHERE __Z0.DT > '2018-06-12'
>
> GROUP BY __Z0.CUSTOMER_ID
>
> /* group sorted */
>
>
> SELECT
>
> COUNT(*)
>
> FROM (
>
> SELECT
>
> __C0_0 AS CUSTOMER_ID
>
> FROM PUBLIC.__T0
>
> GROUP BY __C0_0
>
> HAVING (SUM(__C0_1) > 2)
>
> AND (MAX(__C0_2) < 1)
>
>
> *With an index*
>
> SELECT
>
> __Z0.CUSTOMER_ID AS __C0_0,
>
> SUM(__Z0.RU_TOTAL_APP_SESSIONS_COUNT) AS __C0_1,
>
> MAX(__Z0.RU_TOTAL_WEB_SESSIONS_COUNT) AS __C0_2
>
> FROM PUBLIC.GAL2RU __Z0
>
> /* PUBLIC.DT_IDX2: DT > '2018-06-12' */
>
> WHERE __Z0.DT > '2018-06-12'
>
> GROUP BY __Z0.CUSTOMER_ID
>
>
> SELECT
>
> COUNT(*)
>
> FROM (
>
> SELECT
>
> __C0_0 AS CUSTOMER_ID
>
> FROM PUBLIC.__T0
>
> GROUP BY __C0_0
>
> HAVING (SUM(__C0_1) > 2)
>
> AND (MAX(__C0_2) < 1)
>
> ) _0__Z1
>
>
> On Wed, Aug 22, 2018 at 9:43 AM, eugene miretsky <
> eugene.miret...@gmail.com> wrote:
>
>> Thanks Andrey,
>>
>> We are using the Ignite notebook, any idea if there is a way to provide
>> these flags and hints directly from SQL?
>>
>> From your description, it seems like the query is executed in the
>> following order
>> 1) Group by customer_id
>> 2) For each group, perform the filtering on date using the index and
>> aggregates
>>
>> My impressions was that the order is
>> 1)  On each node, filter rows by date (using the index)
>> 2)  On each node, group by the remaining rows by customer id, and then
>> perform the aggrate
>>
>> That's why we created the index on the dt field, as opposed to
>> customer_id field.
>>
>> Cheers,
>> Eugene
>>
>>
>> On Wed, Aug 22, 2018 at 8:44 AM, Andrey Mashenkov <
>> andrey.mashen...@gmail.com> wrote:
>>
>>> Eugene,
>>>
>>> 1. Note that queryParallelism splits indices and Ignite work similar way
>>> as if index data resides on several nodes. These index part can be looked
>>> up in parallel threads.
>>> 2. It is not a simple query as you data distributed among partitions and
>>> is not collocated and aggregate function are used.
>>> HAVING clause here is a reason, Ignite can apply it on reduce phase only
>>> as HAVING requires aggregate value from all index parts.
>>> 3. If you data already collocated on customer_id then you can hit Ignite
>>> with set SqlFieldsQuery.setCollocated(true). This should force Ignite to
>>> optimize grouping and push down aggregates to map phase.
>>> 4. In query plan you attached you can see H2 uses DT_IDX
>>> /* PUBLIC.DT_IDX: DT > '2018-05-12' */
>>> It is not effective. With this index H2 have to process all data to
>>> calculate aggregate for group. Index on affinity field may be more
>>> effective as data can be processed group by group.
>>> once all group data is process then result can be passed to reducer.
>>> Hope, H2 is smart enough to do such streaming.
>>>
>>> Also, you can try to use composite index on (customer_id, date) columns.
>>> Most likely. hint will needed [2].
>>>
>>> See also about collocated flag [1] and Hits [2]
>>>
>>> [1]
>

Re: Slow SQL query uses only a single CPU

2018-08-22 Thread Andrey Mashenkov
Eugene,

1. Note that queryParallelism splits indices, and Ignite works in a similar way
as if the index data resided on several nodes. These index parts can be looked
up in parallel threads.
2. It is not a simple query, as your data is distributed among partitions, is
not collocated, and aggregate functions are used.
The HAVING clause here is the reason: Ignite can apply it on the reduce phase
only, as HAVING requires aggregate values from all index parts.
3. If your data is already collocated on customer_id, then you can hint Ignite
with SqlFieldsQuery.setCollocated(true). This should force Ignite to
optimize grouping and push aggregates down to the map phase.
4. In the query plan you attached, you can see that H2 uses DT_IDX:
/* PUBLIC.DT_IDX: DT > '2018-05-12' */
It is not effective. With this index, H2 has to process all the data to
calculate the aggregate for a group. An index on the affinity field may be more
effective, as data can be processed group by group:
once all of a group's data is processed, the result can be passed to the
reducer. Hopefully, H2 is smart enough to do such streaming.

Also, you can try to use a composite index on the (customer_id, date) columns.
Most likely, a hint will be needed [2].

See also the collocated flag [1] and hints [2].

[1]
https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/cache/query/SqlFieldsQuery.html#setCollocated-boolean-
[2]
https://apacheignite.readme.io/v2.0/docs/sql-performance-and-debugging#index-hints
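A sketch of the composite index plus an explicit hint for the query from this thread (the index name is illustrative; without the hint, H2 may keep preferring the single-column date index):

```sql
CREATE INDEX gal3ec1_cust_dt_idx ON GAL3EC1 (customer_id, dt);

-- Force the composite index on the map query:
SELECT customer_id
FROM GAL3EC1 USE INDEX (gal3ec1_cust_dt_idx)
WHERE dt > '2018-05-12'
GROUP BY customer_id
HAVING SUM(ec1_bknt_total_product_views_app) > 2
   AND MAX(ec1_hnk_total_product_clicks_app) < 1;
```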


On Wed, Aug 22, 2018 at 3:10 PM eugene miretsky 
wrote:

> Thanks Andrey,
>
> Right now we are testing with only one big node, so the reduce step should
> not take any time.
>
> 1) We already set parallelism to 32, and I can still see only 1 core
> working. Anything else could be preventing multiple cores from working on
> the job?
> 2) Why would the reduce phase need to look at all the data? It seems like
> a fairly simple query
> 3) We are already collocating data  by customer_id (though as I mentioned,
> right now there is only one node)
> 4) We already using collocation and tried using an index, and other
> advice? Is there a way to check what Ignite is actually doing? How are
> indexs used (by Ignite or H2)?
>
> Cheers,
> Eugene
>
> On Wed, Aug 22, 2018 at 3:54 AM, Andrey Mashenkov <
> andrey.mashen...@gmail.com> wrote:
>
>> Hi,
>>
>> 1. Possible there are too much data should be looked for the query. With
>> single node and parallelism=1 query will always run in single thread.
>>  You can try to add more nodes or increase query parallelism to utilize
>> more CPU cores.
>>
>> 2. Index on date field may be not effective as reduce phase should look
>> all the data for further grouping.
>> Try add index on customer_id or use collocation in customer_id (usually
>> more preferable way).
>>
>> Also it is possible the bottleneck is the reduce phase.
>> Is it possible to collocate data by group by column  (customer_id)? This
>> collocation will allow you use collocated flag [1] and Ignite will use more
>> optimal plan.
>>
>> 4. The main techniques is trying to reduce amount to data to be looked up
>> on every phase with using data collocation and indices
>> Ignite provide 2 plans for distributed queries: map and reduce. You can
>> analyse and check these queries separately to understand how much data are
>> processed on map phase and on reduce.
>> Map query process node local data (until distributed joins on), while
>> reduce fetch data from remote node that may costs. .
>>
>>
>> On Wed, Aug 22, 2018 at 6:07 AM eugene miretsky <
>> eugene.miret...@gmail.com> wrote:
>>
>>> Here is the result of EXPLAIN for the afermantioned query:
>>>
>>> SELECT
>>> __Z0.CUSTOMER_ID AS __C0_0,
>>> SUM(__Z0.EC1_BKNT_TOTAL_PRODUCT_VIEWS_APP) AS __C0_1,
>>> MAX(__Z0.EC1_HNK_TOTAL_PRODUCT_CLICKS_APP) AS __C0_2
>>> FROM PUBLIC.GAL3EC1 __Z0
>>> /* PUBLIC.DT_IDX: DT > '2018-05-12' */
>>> WHERE __Z0.DT > '2018-05-12'
>>> GROUP BY __Z0.CUSTOMER_ID
>>> SELECT
>>> COUNT(*)
>>> FROM (
>>> SELECT
>>> __C0_0 AS CUSTOMER_ID
>>> FROM PUBLIC.__T0
>>> GROUP BY __C0_0
>>> HAVING (SUM(__C0_1) > 2)
>>> AND (MAX(__C0_2) < 1)
>>> ) _0__Z1
>>>
>>>
>>>
>>> On Tue, Aug 21, 2018 at 8:18 PM, eugene miretsky <
>>> eugene.miret...@gmail.com> wrote:
>>>
>>>> Hi,
>>>>
>>>> We have a cache called GAL3EC1, it has
>>>>
>>>>1. A composite pKey consisting of customer_id and date
>>>>2. An Index on the date column
>>>>3. 300 sparse columns
>>>>
>>>> We are running 

Re: In the case of read-through cache, both Replicated and Partitioned mode are identical?

2018-08-22 Thread Andrey Mashenkov
Hi,

The CacheStore semantics suppose you will warm up a cache by calling
cache.loadCache().
When you use read-through and forget to load the cache from the backing store,
it is ok if no backups are updated on a read operation.

Backups and a replicated cache, in the case of read-through, just add some
tolerance to node loss and allow you not to fall back to read-through after
the cache has been warmed up.

If you don't care about warm-up, then most likely you are ok with read-through
happening even if some other node has already cached the value.

AFAIK, read-through caches values locally on backup nodes to prevent further
reads from the remote primary, which are expected in the case of a replicated
cache.



On Wed, Aug 22, 2018 at 2:20 PM Prasad Bhalerao <
prasadbhalerao1...@gmail.com> wrote:

> I am just wondering would you really  need backup partitions in case of
> replicated mode? The whole data is replicated, so I think each node itself
> is kind of backup for other.
>
>
> On Wed, Aug 22, 2018, 2:41 PM yonggu.lee  wrote:
>
>> I have a read-through cache backed by a hbase table. I this case, is
>> there no
>> difference between Replicated and Partitioned mode?
>>
>> When I tested, although I set up a Replicated mode, the cache only had
>> Primary, and not Backup part (in web console, monitoring tab)
>>
>>
>>
>>
>>
>> --
>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>>
>

-- 
Best regards,
Andrey V. Mashenkov


Re: Slow SQL query uses only a single CPU

2018-08-22 Thread Andrey Mashenkov
Hi,

1. Possibly there is too much data to be looked up for the query. With a
single node and parallelism=1, the query will always run in a single thread.
You can try to add more nodes or increase query parallelism to utilize
more CPU cores.

2. An index on the date field may not be effective, as the reduce phase has to
look at all the data for further grouping.
Try adding an index on customer_id, or use collocation on customer_id (usually
the more preferable way).

Also, it is possible that the bottleneck is the reduce phase.
Is it possible to collocate data by the GROUP BY column (customer_id)? This
collocation will allow you to use the collocated flag [1], and Ignite will use
a more optimal plan.

4. The main technique is to reduce the amount of data to be looked up on every
phase by using data collocation and indices.
Ignite provides 2 plans for distributed queries: map and reduce. You can
analyse and check these queries separately to understand how much data is
processed on the map phase and on the reduce phase.
The map query processes node-local data (unless distributed joins are on),
while reduce fetches data from remote nodes, which may be costly.


On Wed, Aug 22, 2018 at 6:07 AM eugene miretsky 
wrote:

> Here is the result of EXPLAIN for the afermantioned query:
>
> SELECT
> __Z0.CUSTOMER_ID AS __C0_0,
> SUM(__Z0.EC1_BKNT_TOTAL_PRODUCT_VIEWS_APP) AS __C0_1,
> MAX(__Z0.EC1_HNK_TOTAL_PRODUCT_CLICKS_APP) AS __C0_2
> FROM PUBLIC.GAL3EC1 __Z0
> /* PUBLIC.DT_IDX: DT > '2018-05-12' */
> WHERE __Z0.DT > '2018-05-12'
> GROUP BY __Z0.CUSTOMER_ID
> SELECT
> COUNT(*)
> FROM (
> SELECT
> __C0_0 AS CUSTOMER_ID
> FROM PUBLIC.__T0
> GROUP BY __C0_0
> HAVING (SUM(__C0_1) > 2)
> AND (MAX(__C0_2) < 1)
> ) _0__Z1
>
>
>
> On Tue, Aug 21, 2018 at 8:18 PM, eugene miretsky <
> eugene.miret...@gmail.com> wrote:
>
>> Hi,
>>
>> We have a cache called GAL3EC1, it has
>>
>>1. A composite pKey consisting of customer_id and date
>>2. An Index on the date column
>>3. 300 sparse columns
>>
>> We are running a single EC2 4x8xlarge node.
>>
>> The following query takes 8min to finish
>> Select COUNT (*) FROM (SELECT customer_id FROM GAl3ec1 where dt >
>> '2018-05-12' GROUP BY customer_id having
>> SUM(ec1_bknt_total_product_views_app) > 2 AND
>> MAX(ec1_hnk_total_product_clicks_app) < 1)
>>
>> I have a few questions:
>>
>>1. 'top' command shows %100 cpu utilization (i.e only one of the 32
>>CPUs is used). How can I get the query to use all 32 CPUs? I have tried
>>setting Query Parallelism to 32, but it didn't help,
>>2. Adding the index on date column seems to have slowed down the
>>query. The 8min time from above was without the index, with the index the
>>query doesn't finish (I gave up after 30min). A similar query on a
>>smaller date range showed a 10x slow down with the index. Why?
>>3. Our loads from Spark are very slow as well, and also seem to not
>>use the system resource properly, can that be related?
>>4. What are some good tools and techniques to troubleshoot these
>>problems in Ignite?
>>
>>
>> All the relevant info is attached (configs, cache stats, node stats,
>> etc.).
>>
>> Cheers,
>> Eugene
>>
>>
>>
>>
>>
>>
>>
>

-- 
Best regards,
Andrey V. Mashenkov


Re: Query execution too slow

2018-08-21 Thread Andrey Mashenkov
Hi,

Composite index field order matters. Try to set order=3 for "value" column.
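In DDL terms, the two access patterns discussed in this thread need different leading columns (an illustrative sketch using the field names from the quoted code):

```sql
-- Serves the join on value with additional equality filters:
CREATE INDEX dns_nb_asset_group_data_idx1
  ON DnsNetBiosAssetGroupData (value, assetGroupId, assetTypeInd);

-- Serves "WHERE assetGroupId = ? AND assetTypeInd = ?"; here value goes
-- last (order=3), matching the suggestion above:
CREATE INDEX dns_nb_asset_group_data_idx2
  ON DnsNetBiosAssetGroupData (assetGroupId, assetTypeInd, value);
```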

On Tue, Aug 21, 2018 at 10:43 PM Ilya Kasnacheev 
wrote:

> Hello!
>
> Yes, I am afraid that you will need another index.
>
> Regards,
>
> --
> Ilya Kasnacheev
>
> 2018-08-21 19:55 GMT+03:00 Prasad Bhalerao :
>
>> Hi,
>>
>> Thank you for pointing out the mistake.
>> After changing the order to 1 as follows, SQL executed quickly.
>>
>> public class DnsNetBiosAssetGroupData implements 
>> Data,UpdatableData {
>>
>>   @QuerySqlField
>>   private long id;
>>   @QuerySqlField(orderedGroups = {@QuerySqlField.Group(name = 
>> "dns_nb_asset_group_data_idx1", order = 2)})
>>   private long assetGroupId;
>>   @QuerySqlField(orderedGroups = {@QuerySqlField.Group(name = 
>> "dns_nb_asset_group_data_idx1", order = 3)})
>>   private int assetTypeInd;
>>   private int partitionId;
>>   @QuerySqlField
>>   private long subscriptionId;
>>   @QuerySqlField
>>   private long updatedDate;
>>   //@QuerySqlField (index = true)
>>   @QuerySqlField(orderedGroups = {@QuerySqlField.Group(name = 
>> "dns_nb_asset_group_data_idx1", order = 1)})
>>   private String value;
>>
>>
>> But now with this change I cannot use the same index for condition, "wh
>> re assetGroupId=? and assetTypeInd=?" . Do I have to create separate
>> group index on assetGroupId and assetTypeInd ?
>>
>>
>>
>>
>>
>>
>> Thanks,
>> Prasad
>>
>> On Tue, Aug 21, 2018 at 9:29 PM Ilya Kasnacheev <
>> ilya.kasnach...@gmail.com> wrote:
>>
>>> Hello!
>>>
>>> I think that value should have order of 1 to be used here. Can you try
>>> that?
>>>
>>> Regards,
>>>
>>> --
>>> Ilya Kasnacheev
>>>
>>> 2018-08-21 18:56 GMT+03:00 Prasad Bhalerao >> >:
>>>
 Original Sql query:

 SELECT dnsnb.value,
   dnsnb.id
 FROM DnsNetBiosAssetGroupData dnsnb
 JOIN TABLE (value VARCHAR  = ? ) temp
 ON dnsnb.value = temp.value
 WHERE dnsnb.subscriptionId = ?
 AND dnsnb.assetGroupId = ?
 AND dnsnb.assetTypeInd = ?

 temp table list has around 1_00_000 values.

 I also tried changing the indexes as follows. But it did not work.

 public class DnsNetBiosAssetGroupData implements 
 Data,UpdatableData {

   @QuerySqlField
   private long id;
   @QuerySqlField(orderedGroups = {@QuerySqlField.Group(name = 
 "dns_nb_asset_group_data_idx1", order = 1)})
   private long assetGroupId;
   @QuerySqlField(orderedGroups = {@QuerySqlField.Group(name = 
 "dns_nb_asset_group_data_idx1", order = 2)})
   private int assetTypeInd;
   private int partitionId;
   @QuerySqlField
   private long subscriptionId;
   @QuerySqlField
   private long updatedDate;
   //@QuerySqlField (index = true)
   @QuerySqlField(orderedGroups = {@QuerySqlField.Group(name = 
 "dns_nb_asset_group_data_idx1", order = 3)})
   private String value;


 Thanks,
 Prasad

 On Tue, Aug 21, 2018 at 7:35 PM ilya.kasnacheev <
 ilya.kasnach...@gmail.com> wrote:

> Hello!
>
> Can you please show the original query that you are running?
>
> Regards,
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>

>>>
>

-- 
Best regards,
Andrey V. Mashenkov


Re: And again... Failed to get page IO instance (page content is corrupted)

2018-06-29 Thread Andrey Mashenkov
Hi Oleg,

Yes, page corruption issues shouldn't happen when persistence is disabled.
Please let us know if you face one.


On Fri, Jun 29, 2018 at 1:56 AM Olexandr K 
wrote:

> Hi Andrey,
>
> Thanks for clarifying this.
> We have just a single persistent cache and I reworked the code to get rid
> of expiration policy.
> All our non-persistent caches have expiration policy but this should not
> be a problem, right?
>
> BR, Oleksandr
>
> On Thu, Jun 28, 2018 at 8:37 PM, Andrey Mashenkov <
> andrey.mashen...@gmail.com> wrote:
>
>> Hi Oleg,
>>
>> The issue you mentioned IGNITE-8659 [1] is caused by IGNITE-5874 [2] that
>> will not a part of ignite-2.6 release.
>> For now, 'ExpiryPolicy with persistence' is totally broken and all it's
>> fixes are planned to the next 2.7 release.
>>
>>
>> [1] https://issues.apache.org/jira/browse/IGNITE-8659
>> [2] https://issues.apache.org/jira/browse/IGNITE-5874
>>
>> On Tue, Jun 26, 2018 at 11:26 PM Olexandr K <
>> olexandr.kundire...@gmail.com> wrote:
>>
>>> Hi Andrey,
>>>
>>> I see Fix version 2.7 in Jira:
>>> https://issues.apache.org/jira/browse/IGNITE-8659
>>> This is a critical bug.. bouncing of server node in not-a-right-time
>>> causes a catastrophe.
>>> This mean no availability in fact - I had to clean data folders to start
>>> my cluster after that
>>>
>>> BR, Oleksandr
>>>
>>>
>>> On Fri, Jun 22, 2018 at 4:06 PM, Andrey Mashenkov <
>>> andrey.mashen...@gmail.com> wrote:
>>>
>>>> Hi,
>>>>
>>>> We've found and fixed few issues related to ExpiryPolicy usage.
>>>> Most likely, your issue is [1] and it is planned to ignite 2.6 release.
>>>>
>>>> [1] https://issues.apache.org/jira/browse/IGNITE-8659
>>>>
>>>>
>>>> On Fri, Jun 22, 2018 at 8:43 AM Olexandr K <
>>>> olexandr.kundire...@gmail.com> wrote:
>>>>
>>>>> Hi Team,
>>>>>
>>>>> Issue is still there in 2.5.0
>>>>>
>>>>> Steps to reproduce:
>>>>> 1) start 2 servers + 2 clients topology
>>>>> 2) start load testing on client nodes
>>>>> 3) stop server 1
>>>>> 4) start server 1
>>>>> 5) stop server 1 again when rebalancing is in progress
>>>>> => and we got data corrupted here, see error below
>>>>> => we were not able to restart Ignite cluster after that and need to
>>>>> perform data folders cleanup...
>>>>>
>>>>> 2018-06-21 11:28:01.684 [ttl-cleanup-worker-#43] ERROR  - Critical
>>>>> system error detected. Will be handled accordingly to configured handler
>>>>> [hnd=class o.a.i.failure.StopNodeOrHaltFailureHandler,
>>>>> failureCtx=FailureContext [type=SYSTEM_WORKER_TERMINATION, err=class
>>>>> o.a.i.IgniteException: Runtime failure on bounds: [lower=null,
>>>>> upper=PendingRow [
>>>>> org.apache.ignite.IgniteException: Runtime failure on bounds:
>>>>> [lower=null, upper=PendingRow []]
>>>>> at
>>>>> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.find(BPlusTree.java:971)
>>>>> ~[ignite-core-2.5.0.jar:2.5.0]
>>>>> at
>>>>> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.find(BPlusTree.java:950)
>>>>> ~[ignite-core-2.5.0.jar:2.5.0]
>>>>> at
>>>>> org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl.expire(IgniteCacheOffheapManagerImpl.java:1024)
>>>>> ~[ignite-core-2.5.0.jar:2.5.0]
>>>>> at
>>>>> org.apache.ignite.internal.processors.cache.GridCacheTtlManager.expire(GridCacheTtlManager.java:197)
>>>>> ~[ignite-core-2.5.0.jar:2.5.0]
>>>>> at
>>>>> org.apache.ignite.internal.processors.cache.GridCacheSharedTtlCleanupManager$CleanupWorker.body(GridCacheSharedTtlCleanupManager.java:137)
>>>>> [ignite-core-2.5.0.jar:2.5.0]
>>>>> at
>>>>> org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
>>>>> [ignite-core-2.5.0.jar:2.5.0]
>>>>> at java.lang.Thread.run(Thread.java:748) [?:1.8.0_162]
>>>>> Caused by: java.lang.IllegalStateException: Item not found: 2
>>>>> at
>>>>> org.apache.ignite.internal.processors.cache.persistence.tree.io

Re: And again... Failed to get page IO instance (page content is corrupted)

2018-06-28 Thread Andrey Mashenkov
Hi Oleg,

The issue you mentioned, IGNITE-8659 [1], is caused by IGNITE-5874 [2],
which will not be part of the ignite-2.6 release.
For now, 'ExpiryPolicy with persistence' is totally broken, and all of its
fixes are planned for the next 2.7 release.


[1] https://issues.apache.org/jira/browse/IGNITE-8659
[2] https://issues.apache.org/jira/browse/IGNITE-5874

On Tue, Jun 26, 2018 at 11:26 PM Olexandr K 
wrote:

> Hi Andrey,
>
> I see Fix version 2.7 in Jira:
> https://issues.apache.org/jira/browse/IGNITE-8659
> This is a critical bug.. bouncing of server node in not-a-right-time
> causes a catastrophe.
> This mean no availability in fact - I had to clean data folders to start
> my cluster after that
>
> BR, Oleksandr
>
>
> On Fri, Jun 22, 2018 at 4:06 PM, Andrey Mashenkov <
> andrey.mashen...@gmail.com> wrote:
>
>> Hi,
>>
>> We've found and fixed few issues related to ExpiryPolicy usage.
>> Most likely, your issue is [1] and it is planned to ignite 2.6 release.
>>
>> [1] https://issues.apache.org/jira/browse/IGNITE-8659
>>
>>
>> On Fri, Jun 22, 2018 at 8:43 AM Olexandr K 
>> wrote:
>>
>>> Hi Team,
>>>
>>> Issue is still there in 2.5.0
>>>
>>> Steps to reproduce:
>>> 1) start 2 servers + 2 clients topology
>>> 2) start load testing on client nodes
>>> 3) stop server 1
>>> 4) start server 1
>>> 5) stop server 1 again when rebalancing is in progress
>>> => and we got data corrupted here, see error below
>>> => we were not able to restart Ignite cluster after that and need to
>>> perform data folders cleanup...
>>>
>>> 2018-06-21 11:28:01.684 [ttl-cleanup-worker-#43] ERROR  - Critical
>>> system error detected. Will be handled accordingly to configured handler
>>> [hnd=class o.a.i.failure.StopNodeOrHaltFailureHandler,
>>> failureCtx=FailureContext [type=SYSTEM_WORKER_TERMINATION, err=class
>>> o.a.i.IgniteException: Runtime failure on bounds: [lower=null,
>>> upper=PendingRow [
>>> org.apache.ignite.IgniteException: Runtime failure on bounds:
>>> [lower=null, upper=PendingRow []]
>>> at
>>> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.find(BPlusTree.java:971)
>>> ~[ignite-core-2.5.0.jar:2.5.0]
>>> at
>>> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.find(BPlusTree.java:950)
>>> ~[ignite-core-2.5.0.jar:2.5.0]
>>> at
>>> org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl.expire(IgniteCacheOffheapManagerImpl.java:1024)
>>> ~[ignite-core-2.5.0.jar:2.5.0]
>>> at
>>> org.apache.ignite.internal.processors.cache.GridCacheTtlManager.expire(GridCacheTtlManager.java:197)
>>> ~[ignite-core-2.5.0.jar:2.5.0]
>>> at
>>> org.apache.ignite.internal.processors.cache.GridCacheSharedTtlCleanupManager$CleanupWorker.body(GridCacheSharedTtlCleanupManager.java:137)
>>> [ignite-core-2.5.0.jar:2.5.0]
>>> at
>>> org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
>>> [ignite-core-2.5.0.jar:2.5.0]
>>> at java.lang.Thread.run(Thread.java:748) [?:1.8.0_162]
>>> Caused by: java.lang.IllegalStateException: Item not found: 2
>>> at
>>> org.apache.ignite.internal.processors.cache.persistence.tree.io.AbstractDataPageIO.findIndirectItemIndex(AbstractDataPageIO.java:341)
>>> ~[ignite-core-2.5.0.jar:2.5.0]
>>> at
>>> org.apache.ignite.internal.processors.cache.persistence.tree.io.AbstractDataPageIO.getDataOffset(AbstractDataPageIO.java:450)
>>> ~[ignite-core-2.5.0.jar:2.5.0]
>>> at
>>> org.apache.ignite.internal.processors.cache.persistence.tree.io.AbstractDataPageIO.readPayload(AbstractDataPageIO.java:492)
>>> ~[ignite-core-2.5.0.jar:2.5.0]
>>> at
>>> org.apache.ignite.internal.processors.cache.persistence.CacheDataRowAdapter.initFromLink(CacheDataRowAdapter.java:150)
>>> ~[ignite-core-2.5.0.jar:2.5.0]
>>> at
>>> org.apache.ignite.internal.processors.cache.persistence.CacheDataRowAdapter.initFromLink(CacheDataRowAdapter.java:102)
>>> ~[ignite-core-2.5.0.j
>>>
>>> BR, Oleksandr
>>>
>>> On Thu, Jun 14, 2018 at 2:51 PM, Olexandr K <
>>> olexandr.kundire...@gmail.com> wrote:
>>>
>>>> Upgraded to 2.5.0 and didn't get such error so far..
>>>> Thanks!
>>>>
>>>> On Wed, Jun 13, 2018 at 4:58 PM, dkarachentsev <
>>>> dkarachent...@gridgain.com>

Re: Ignite 2.4 & 2.5 together?

2018-06-25 Thread Andrey Mashenkov
Hi,

It is recommended to set the baseline topology first if the grid was
upgraded from a pre-2.4 version.
Then deactivate the grid to force a checkpoint; this can reduce the time of
the next startup.
Then stop the grid, update the jars to the newer version, and start the grid.
Finally, you may need to activate the grid if it was deactivated in the
first step.


On Mon, Jun 25, 2018 at 12:19 PM Michaelikus  wrote:

> ok, thanks
>
> So where i can find information about migration procedure?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


-- 
Best regards,
Andrey V. Mashenkov


Re: Ignite 2.4 & 2.5 together?

2018-06-25 Thread Andrey Mashenkov
Hi,

It should not be possible by default and is highly discouraged, as there can
be differences in the communication protocol between versions and
compatibility is not tested.

However, the persistence format should be backward compatible, which means a
newer Ignite version can work with persistence created by an older version.

On Mon, Jun 25, 2018 at 11:30 AM Michaelikus  wrote:

> Hi ppl!
>
> I have cluser of 5 nodes ignite 2.4.
>
> Is it possible to add new nodes with 2.5 version to it?
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


-- 
Best regards,
Andrey V. Mashenkov


Re: And again... Failed to get page IO instance (page content is corrupted)

2018-06-22 Thread Andrey Mashenkov
Hi,

We've found and fixed a few issues related to ExpiryPolicy usage.
Most likely, your issue is [1], and the fix is planned for the Ignite 2.6
release.

[1] https://issues.apache.org/jira/browse/IGNITE-8659


On Fri, Jun 22, 2018 at 8:43 AM Olexandr K 
wrote:

> Hi Team,
>
> Issue is still there in 2.5.0
>
> Steps to reproduce:
> 1) start 2 servers + 2 clients topology
> 2) start load testing on client nodes
> 3) stop server 1
> 4) start server 1
> 5) stop server 1 again when rebalancing is in progress
> => and we got data corrupted here, see error below
> => we were not able to restart Ignite cluster after that and need to
> perform data folders cleanup...
>
> 2018-06-21 11:28:01.684 [ttl-cleanup-worker-#43] ERROR  - Critical system
> error detected. Will be handled accordingly to configured handler
> [hnd=class o.a.i.failure.StopNodeOrHaltFailureHandler,
> failureCtx=FailureContext [type=SYSTEM_WORKER_TERMINATION, err=class
> o.a.i.IgniteException: Runtime failure on bounds: [lower=null,
> upper=PendingRow [
> org.apache.ignite.IgniteException: Runtime failure on bounds: [lower=null,
> upper=PendingRow []]
> at
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.find(BPlusTree.java:971)
> ~[ignite-core-2.5.0.jar:2.5.0]
> at
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.find(BPlusTree.java:950)
> ~[ignite-core-2.5.0.jar:2.5.0]
> at
> org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl.expire(IgniteCacheOffheapManagerImpl.java:1024)
> ~[ignite-core-2.5.0.jar:2.5.0]
> at
> org.apache.ignite.internal.processors.cache.GridCacheTtlManager.expire(GridCacheTtlManager.java:197)
> ~[ignite-core-2.5.0.jar:2.5.0]
> at
> org.apache.ignite.internal.processors.cache.GridCacheSharedTtlCleanupManager$CleanupWorker.body(GridCacheSharedTtlCleanupManager.java:137)
> [ignite-core-2.5.0.jar:2.5.0]
> at
> org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
> [ignite-core-2.5.0.jar:2.5.0]
> at java.lang.Thread.run(Thread.java:748) [?:1.8.0_162]
> Caused by: java.lang.IllegalStateException: Item not found: 2
> at
> org.apache.ignite.internal.processors.cache.persistence.tree.io.AbstractDataPageIO.findIndirectItemIndex(AbstractDataPageIO.java:341)
> ~[ignite-core-2.5.0.jar:2.5.0]
> at
> org.apache.ignite.internal.processors.cache.persistence.tree.io.AbstractDataPageIO.getDataOffset(AbstractDataPageIO.java:450)
> ~[ignite-core-2.5.0.jar:2.5.0]
> at
> org.apache.ignite.internal.processors.cache.persistence.tree.io.AbstractDataPageIO.readPayload(AbstractDataPageIO.java:492)
> ~[ignite-core-2.5.0.jar:2.5.0]
> at
> org.apache.ignite.internal.processors.cache.persistence.CacheDataRowAdapter.initFromLink(CacheDataRowAdapter.java:150)
> ~[ignite-core-2.5.0.jar:2.5.0]
> at
> org.apache.ignite.internal.processors.cache.persistence.CacheDataRowAdapter.initFromLink(CacheDataRowAdapter.java:102)
> ~[ignite-core-2.5.0.j
>
> BR, Oleksandr
>
> On Thu, Jun 14, 2018 at 2:51 PM, Olexandr K  > wrote:
>
>> Upgraded to 2.5.0 and didn't get such error so far..
>> Thanks!
>>
>> On Wed, Jun 13, 2018 at 4:58 PM, dkarachentsev <
>> dkarachent...@gridgain.com> wrote:
>>
>>> It would be better to upgrade to 2.5, where it is fixed.
>>> But if you want to overcome this issue in your's version, you need to add
>>> ignite-indexing dependency to your classpath and configure SQL indexes.
>>> For
>>> example [1], just modify it to work with Spring in XML:
>>> 
>>> 
>>> org.your.KeyObject
>>> org.your.ValueObject
>>> 
>>> 
>>>
>>> [1]
>>>
>>> https://apacheignite-sql.readme.io/docs/schema-and-indexes#section-registering-indexed-types
>>>
>>> Thanks!
>>> -Dmitry
>>>
>>>
>>>
>>> --
>>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>>>
>>
>>
>

-- 
Best regards,
Andrey V. Mashenkov


Re: Quick questions on Evictions

2018-06-21 Thread Andrey Mashenkov
Yes, you are right.

On Wed, Jun 20, 2018 at 4:13 PM the_palakkaran  wrote:

> So to conclude, if I have enabled on heap storage for cache(using
> cache.setOnHeapEnabled(true),
> then :
> 1. Still data will be stored off heap, but will be loaded to heap. To
> escape
> out of memory error, I have to set eviction policies.
> 2. Off heap entries will be written to disk based on data page eviction
> mode
> enabled on data region. This size is limited based on initial and maximum
> size of data region.
>
> I hope this is how it works. Big thanks !!
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


-- 
Best regards,
Andrey V. Mashenkov


Re: Quick questions on Evictions

2018-06-20 Thread Andrey Mashenkov
DataRegionConfiguration.setMaxSize() should be used to limit offheap memory
usage.
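For reference, a minimal sketch of capping off-heap usage via a data region; the region name and sizes here are arbitrary illustration values, not recommendations:

```java
import org.apache.ignite.configuration.DataRegionConfiguration;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class RegionLimitExample {
    /** Builds a data region capped at 1 GiB of off-heap memory. */
    public static DataRegionConfiguration limitedRegion() {
        DataRegionConfiguration region = new DataRegionConfiguration();
        region.setName("limited_region");          // arbitrary region name
        region.setInitialSize(256L * 1024 * 1024); // start with 256 MiB
        region.setMaxSize(1024L * 1024 * 1024);    // hard off-heap cap: 1 GiB
        region.setPersistenceEnabled(true);        // data beyond RAM stays on disk
        return region;
    }

    /** Registers the region as the default one for the node. */
    public static IgniteConfiguration config() {
        DataStorageConfiguration storage = new DataStorageConfiguration();
        storage.setDefaultDataRegionConfiguration(limitedRegion());

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setDataStorageConfiguration(storage);
        return cfg;
    }
}
```

With persistence enabled on the region, entries that do not fit under the cap are served from disk instead of failing with an out-of-memory error.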

On Tue, Jun 19, 2018 at 2:35 PM the_palakkaran  wrote:

> So how do I limit cache size if ignite native persistence is enabled using
> dataRegionCfg.setPersistenceEnabled(true)? I don't want it to keep a lot of
> data in memory and others may be kept on disk. That is the requirement.
>
> Also, I do have on heap cache enabled. But I read in many threads that
> Ignite stores everything off heap and only references may be kept in java
> heap. So how do I limit off heap cache usage? I mean I don't want to it to
> use more than a particular amount of off heap memory?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


-- 
Best regards,
Andrey V. Mashenkov


Re: SQL Query full table scan, Node goes down

2018-06-20 Thread Andrey Mashenkov
Hi,

Try to set the lazy flag for the SQL query [1].

[1]
https://apacheignite-sql.readme.io/docs/performance-and-debugging#result-set-lazy-load
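A minimal sketch of a lazy query; the table and cache names below are hypothetical:

```java
import java.util.List;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.query.FieldsQueryCursor;
import org.apache.ignite.cache.query.SqlFieldsQuery;

public class LazyQueryExample {
    /** With lazy=true the result set is streamed in batches instead of
     *  being fully materialized in memory, which protects the node when
     *  a client selects more rows than fit in RAM. */
    public static SqlFieldsQuery lazyQuery() {
        return new SqlFieldsQuery("select * from BigTable").setLazy(true);
    }

    /** Iterates the cursor row by row. */
    public static void run(Ignite ignite) {
        IgniteCache<?, ?> cache = ignite.cache("bigTableCache"); // hypothetical cache
        try (FieldsQueryCursor<List<?>> cursor = cache.query(lazyQuery())) {
            for (List<?> row : cursor) {
                // process one row at a time
            }
        }
    }
}
```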

On Tue, Jun 19, 2018 at 8:06 PM bhaskar 
wrote:

> Hi,
> I have Ignite 5 node cluster with more than dozen tables(Cache) . Our
> client
> are using SQL and Tableau. The node goes down when any client quries select
> * from table which is bigger than RAM size.
> we have 3 years data but last 2 months data is actively quried 80% of time.
>
> 1. How can I control such SQL when the rows are more than RAM size?
> 2. We have to control on the node not client as we can't conrol client.
>
> Thanks
> Bhaskar
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


-- 
Best regards,
Andrey V. Mashenkov


Re: Quick questions on Evictions

2018-06-19 Thread Andrey Mashenkov
Hi,

DataPageEvictionMode is about the algorithm for choosing which page to
replace. EvictionPolicy is what you are looking for, e.g. FifoEvictionPolicy
or LruEvictionPolicy.

It looks like eviction policies can't be used with persistence, as all of
them use non-persistent structures to track cache entries.
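A sketch of wiring an LRU policy to an on-heap cache; the cache name and size are arbitrary illustration values:

```java
import org.apache.ignite.cache.eviction.lru.LruEvictionPolicy;
import org.apache.ignite.configuration.CacheConfiguration;

public class OnHeapEvictionExample {
    /** Cache with on-heap storage capped by an LRU eviction policy. */
    public static CacheConfiguration<Integer, String> cacheConfig() {
        LruEvictionPolicy<Integer, String> lru = new LruEvictionPolicy<>();
        lru.setMaxSize(100_000); // keep at most 100k entries on the Java heap

        CacheConfiguration<Integer, String> cc = new CacheConfiguration<>("onHeapCache");
        cc.setOnheapCacheEnabled(true); // entries are additionally kept on heap
        cc.setEvictionPolicy(lru);      // evicted entries remain off-heap
        return cc;
    }
}
```

Note that the policy only evicts entries from the on-heap layer back to off-heap memory; it does not limit off-heap usage itself.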



On Tue, Jun 19, 2018 at 8:30 AM the_palakkaran  wrote:

> Hi,
>
> DataPageEvictionMode is deprecated now, right? What should I do to evict my
> off heap entries? Also, can I limit off heap memory usage?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


-- 
Best regards,
Andrey V. Mashenkov


Re: Error while Starting Grid: javax.management.InstanceAlreadyExistsException: (LruEvictionPolicy)

2018-06-18 Thread Andrey Mashenkov
Possibly, it has already been fixed.
Please try to upgrade to the latest version.

On Fri, Jun 8, 2018 at 5:25 PM HEWA WIDANA GAMAGE, SUBASH <
subash.hewawidanagam...@fmr.com> wrote:

> Hi Andrey,
>
> Thank you very much for the prompt response.
>
>
>
> We have only one node in a JVM.
>
>
>
>
>
> This is my grid config. We use 1.9.0 Ignite.
>
>
>
> IgniteConfiguration cfg = *new *IgniteConfiguration();
>
> cfg.setPeerClassLoadingEnabled(*false*);
>
> cfg.setLifecycleBeans(*new *LogLifecycleBean());
>
> TcpDiscoverySpi discoverySpi = *new *TcpDiscoverySpi();
>
> TcpDiscoveryVmIpFinder ipFinder = *new *TcpDiscoveryVmIpFinder();
> Collection addressSet = *new *HashSet<>();
> *for *(String address : *ipList*) {
> addressSet.add(address);
> }
> ipFinder.setAddresses(addressSet);
>
> discoverySpi.setJoinTimeout(1);
> discoverySpi.setLocalPort(47500);
> discoverySpi.setIpFinder(ipFinder);
>
> cfg.setDiscoverySpi(discoverySpi);
>
>
>
> And this is the cache config. We don’t set cache group specifically.
>
>
>
> CacheConfiguration cc = *new *CacheConfiguration();
>
> cc.setName(*"mycache"*);
> cc.setBackups(1);
> cc.setCacheMode(CacheMode.*PARTITIONED*);
> cc.setAtomicityMode(CacheAtomicityMode.*ATOMIC*);
>
> LruEvictionPolicy evpl = *new *LruEvictionPolicy();
> evpl.setMaxSize(1);
>
> cc.setEvictionPolicy(evpl);
>
> cc.setExpiryPolicyFactory(CreatedExpiryPolicy.*factoryOf*(
> *new *Duration(TimeUnit.*SECONDS*, 15)));
>
> cc.setStatisticsEnabled(*true*);
>
>
>
>
>
>
>
> *From:* Andrey Mashenkov [mailto:andrey.mashen...@gmail.com]
> *Sent:* Friday, June 08, 2018 10:02 AM
> *To:* user@ignite.apache.org
> *Subject:* Re: Error while Starting Grid:
> javax.management.InstanceAlreadyExistsException: (LruEvictionPolicy)
>
>
>
> Hi,
>
>
>
> Looks like a bug.
>
>
>
> Can you share grid configuration?
>
> Do you have more than one node in same JVM?
>
> Do you have configure cache groups manually?
>
>
>
> On Fri, Jun 8, 2018 at 4:48 PM, HEWA WIDANA GAMAGE, SUBASH <
> subash.hewawidanagam...@fmr.com> wrote:
>
> Hi everyone,
>
>
>
> As a quick note on what we do here, we listen to NODE_FAILED, and
> NODE_SEGMENTED events and upon such events, we use Ignition.stopAll(true)
> and Ignition.start() to restart the Ignite grid in a given JVM. Here Ignite
> does  not starts as a standalone process by itself, but bootstrap
> programmatically since it’s meant to be a part of some other main process.
>
>
>
> So we received a NODE_FAILED evet and restarted Ignite where we see
> following error and start fails. And “mycache” is created with LRU
>  eviction policy at Ignite startup process.
>
>
>
> As per error, it tries to registering an LruEvictionPolicy MBean twice. We
> use a cache named mycache in PARTITIONED mode with 4 nodes in the cluster.
> Any idea for this behavior ?
>
>
>
>
>
> org.apache.ignite.IgniteException: Failed to register MBean for component:
> LruEvictionPolicy [max=10, batchSize=1, maxMemSize=524288000,
> memSize=0, size=0]
>at
> org.apache.ignite.internal.util.IgniteUtils.convertException(IgniteUtils.java:946)
>at org.apache.ignite.Ignition.start(Ignition.java:325)
>at com.test.IgniteRuntime.start(IgniteRuntime.java:87)
>at
> com.test.segmentation.SegmentationResolver.recycle(SegmentationResolver.java:61)
>at
> com.test.RandomizedDelayResolver.resolve(RandomizedDelayResolver.java:47)
>at
> com.test.SegmentationProcessor.lambda$init$2(SegmentationProcessor.java:95)
>at
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
>at
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
>at
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
>at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>at java.lang.Thread.run(Thread.java:748)
> Caused by: org.apache.ignite.IgniteCheckedException: Failed to register
> MBean for component: LruEvictionPolicy [max=10, batchSize=1,
> maxMemSize=524288000, memSize=0, size=0]
>at
> org.apache.ignite.internal.processors.cache.GridCacheProcessor.registerMbean(GridCacheProcessor.java:3518)
>at
> org.apache.ignite.internal.processors.cache.GridCacheProcessor.prepare(GridCacheProcessor.java:557)
>at

Re: MSSQL as backend for cache

2018-06-13 Thread Andrey Mashenkov
Hi,

SAXParseException says a namespace declaration was missing. Hope this [1]
will be helpful.


[1]
https://stackoverflow.com/questions/2897819/spring-using-static-final-fields-constants-for-bean-initialization
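For reference, the util prefix must be declared on the root <beans> element before <util:constant> can be used. A sketch (schema versions may differ in your setup):

```xml
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:util="http://www.springframework.org/schema/util"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
                           http://www.springframework.org/schema/beans/spring-beans.xsd
                           http://www.springframework.org/schema/util
                           http://www.springframework.org/schema/util/spring-util.xsd">

    <!-- util:constant exposes a static final field as a bean value. -->
    <util:constant static-field="java.sql.Connection.TRANSACTION_READ_COMMITTED"/>

</beans>
```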

On Wed, Jun 13, 2018 at 1:19 PM, Michaelikus  wrote:

> it seems error was because of declaration of cacheStoreFactory absent
> After i added next block, situation changes.
>
> 
>  
>  
>  
>  
>  
> 
>
> 
> 
> 
> 
> 
> 
> 
> 
>
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
>
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
>
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
>
> 
> 
>
> 
> 
> 
> 
> 
> 
>
> 
> 
> 
> 
> 
> 
> 
> 
> 
>
> 
>
> Now i got next error:
>
> class org.apache.ignite.IgniteException: Failed to instantiate Spring XML
> application context [springUrl=file:/etc/apache-ignite/cache_mssql.xml,
> err=Line 101 in XML document from URL
> [file:/etc/apache-ignite/cache_mssql.xml] is invalid; nested exception is
> org.xml.sax.SAXParseException; lineNumber: 101; columnNumber: 111; The
> prefix "util" for element "util:constant" is not bound.]
>
>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>



-- 
Best regards,
Andrey V. Mashenkov


Re: And again... Failed to get page IO instance (page content is corrupted)

2018-06-13 Thread Andrey Mashenkov
Hi,

Possibly, it is a bug in the partition eviction optimization: Ignite can
skip the partition eviction procedure and remove a partition instantly if
there are no indexes.

If so, you can try the latest ignite-2.5 version [1], or as a workaround you
can add an index by configuring a QueryEntity or by calling
cacheCfg.setIndexedTypes().


[1] https://ignite.apache.org/download.cgi#binaries
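A sketch of the workaround; the cache name and key/value classes below are hypothetical:

```java
import org.apache.ignite.cache.query.annotations.QuerySqlField;
import org.apache.ignite.configuration.CacheConfiguration;

public class IndexedTypesExample {
    /** Hypothetical value class with one SQL-indexed field. */
    public static class Person {
        @QuerySqlField(index = true)
        private String name;
    }

    /** Registering indexed types gives the cache SQL indexes, which forces
     *  the regular (non-instant) partition eviction path instead of the
     *  buggy optimization. */
    public static CacheConfiguration<Integer, Person> cacheConfig() {
        CacheConfiguration<Integer, Person> cc = new CacheConfiguration<>("personCache");
        cc.setIndexedTypes(Integer.class, Person.class); // key class, value class
        return cc;
    }
}
```

The ignite-indexing module must be on the classpath for the indexes to take effect at runtime.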

On Wed, Jun 13, 2018 at 1:22 AM, Oleks K 
wrote:

> Hi guys,
>
> I got similar errors in 2.4.0
>
> First:
>
> org.apache.ignite.IgniteException: Runtime failure on bounds: [lower=null,
> upper=PendingRow []]
>   --> Caused by: java.lang.IllegalStateException: Failed to get page IO
> instance (page content is corrupted)
>
> Then lots of:
>
> org.apache.ignite.IgniteException: Runtime failure on bounds
>   --> Caused by: java.lang.IllegalStateException: Item not found: 3
>
> This was reproduced when I started and stopped server nodes under the load
> Topology: 2 server and 2 client nodes
> Java: 1.8.0_162
> OS: Windows Server 2012 R2 6.3 amd64
>
> Cache config:
> 
> 
>  value="auth_durable_region"/>
> 
>  value="FULL_ASYNC"/>
> 
> 
> 
> 
>
> Ignite team, can you comment on this please?
> How critical is the issue? What is the impact?
> Any workarounds? Fix planned?
>
> 2018-06-13 00:22:30.978 [exchange-worker-#42] INFO
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.
> GridDhtPartitionDemander
> - Starting rebalancing [mode=ASYNC,
> fromNode=bdddfe24-aab3-46fa-9452-efe933783adb, partitionsCount=787,
> topology=AffinityTopologyVersion [topVer=5, minorTopVer=0], updateSeq=12]
> 2018-06-13 00:22:31.594 [ttl-cleanup-worker-#52] ERROR
> org.apache.ignite.internal.processors.cache.GridCacheSharedTtlCleanupManag
> er
> - Runtime error caught during grid runnable execution: GridWorker
> [name=ttl-cleanup-worker, igniteInstanceName=null, finished=false,
> hashCode=473353699, interrupted=false, runner=ttl-cleanup-worker-#52]
> org.apache.ignite.IgniteException: Runtime failure on bounds: [lower=null,
> upper=PendingRow []]
> at
> org.apache.ignite.internal.processors.cache.persistence.
> tree.BPlusTree.find(BPlusTree.java:963)
> ~[ignite-core-2.4.0.jar:2.4.0]
> at
> org.apache.ignite.internal.processors.cache.persistence.
> tree.BPlusTree.find(BPlusTree.java:942)
> ~[ignite-core-2.4.0.jar:2.4.0]
> at
> org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl.
> expire(IgniteCacheOffheapManagerImpl.java:974)
> ~[ignite-core-2.4.0.jar:2.4.0]
> at
> org.apache.ignite.internal.processors.cache.GridCacheTtlManager.expire(
> GridCacheTtlManager.java:197)
> ~[ignite-core-2.4.0.jar:2.4.0]
> at
> org.apache.ignite.internal.processors.cache.GridCacheSharedTtlCleanupManag
> er$CleanupWorker.body(GridCacheSharedTtlCleanupManager.java:129)
> ~[ignite-core-2.4.0.jar:2.4.0]
> at
> org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
> [ignite-core-2.4.0.jar:2.4.0]
> at java.lang.Thread.run(Thread.java:748) [?:1.8.0_162]
> Caused by: java.lang.IllegalStateException: Failed to get page IO instance
> (page content is corrupted)
> at
> org.apache.ignite.internal.processors.cache.persistence.
> tree.io.IOVersions.forVersion(IOVersions.java:83)
> ~[ignite-core-2.4.0.jar:2.4.0]
> at
> org.apache.ignite.internal.processors.cache.persistence.
> tree.io.IOVersions.forPage(IOVersions.java:95)
> ~[ignite-core-2.4.0.jar:2.4.0]
> at
> org.apache.ignite.internal.processors.cache.persistence.
> CacheDataRowAdapter.initFromLink(CacheDataRowAdapter.java:148)
> ~[ignite-core-2.4.0.jar:2.4.0]
> at
> org.apache.ignite.internal.processors.cache.persistence.
> CacheDataRowAdapter.initFromLink(CacheDataRowAdapter.java:102)
> ~[ignite-core-2.4.0.jar:2.4.0]
> at
> org.apache.ignite.internal.processors.cache.tree.
> PendingRow.initKey(PendingRow.java:72)
> ~[ignite-core-2.4.0.jar:2.4.0]
> at
> org.apache.ignite.internal.processors.cache.tree.
> PendingEntriesTree.getRow(PendingEntriesTree.java:118)
> ~[ignite-core-2.4.0.jar:2.4.0]
> at
> org.apache.ignite.internal.processors.cache.tree.
> PendingEntriesTree.getRow(PendingEntriesTree.java:31)
> ~[ignite-core-2.4.0.jar:2.4.0]
> at
> org.apache.ignite.internal.processors.cache.persistence.
> tree.BPlusTree$ForwardCursor.fillFromBuffer(BPlusTree.java:4614)
> ~[ignite-core-2.4.0.jar:2.4.0]
> at
> org.apache.ignite.internal.processors.cache.persistence.
> tree.BPlusTree$ForwardCursor.init(BPlusTree.java:4516)
> ~[ignite-core-2.4.0.jar:2.4.0]
> at
> org.apache.ignite.internal.processors.cache.persistence.
> tree.BPlusTree$ForwardCursor.access$5300(BPlusTree.java:4455)
> ~[ignite-core-2.4.0.jar:2.4.0]
> at
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.
> 

Re: MSSQL as backend for cache

2018-06-13 Thread Andrey Mashenkov
Hi,

You've changed the invalid URL property to another invalid one.
Anyway, I see no Ignite issue here.
Please take a look at how to configure the MS SQL JDBC driver [1].


[1]
https://stackoverflow.com/questions/11206673/ms-sql-configuration-in-beans-xml

On Wed, Jun 13, 2018 at 10:50 AM, Michaelikus  wrote:

> Seems problem was not ni proterty i've used.
> I've change its name and now got same error
>
> /usr/share/apache-ignite/bin/ignite.sh -v /etc/apache-ignite/cache_
> mssql.xml
> Ignite Command Line Startup, ver. 2.5.0#20180523-sha1:86e110c7
> 2018 Copyright(C) Apache Software Foundation
>
> class org.apache.ignite.IgniteException: Failed to instantiate Spring XML
> application context [springUrl=file:/etc/apache-ignite/cache_mssql.xml,
> err=Error creating bean with name 'mssqlserver' defined in URL
> [file:/etc/apache-ignite/cache_mssql.xml]: Error setting property values;
> nested exception is org.springframework.beans.
> NotWritablePropertyException:
> Invalid property 'my_random_property' of bean class
> [com.microsoft.sqlserver.jdbc.SQLServerDriver]: Bean property
> 'my_random_property' is not writable or has an invalid setter method. Does
> the parameter type of the setter match the return type of the getter?]
> at
> org.apache.ignite.internal.util.IgniteUtils.convertException(IgniteUtils.
> java:990)
> at org.apache.ignite.Ignition.start(Ignition.java:355)
> at
> org.apache.ignite.startup.cmdline.CommandLineStartup.
> main(CommandLineStartup.java:301)
> Caused by: class org.apache.ignite.IgniteCheckedException: Failed to
> instantiate Spring XML application context
> [springUrl=file:/etc/apache-ignite/cache_mssql.xml, err=Error creating
> bean
> with name 'mssqlserver' defined in URL
> [file:/etc/apache-ignite/cache_mssql.xml]: Error setting property values;
> nested exception is org.springframework.beans.
> NotWritablePropertyException:
> Invalid property 'my_random_property' of bean class
> [com.microsoft.sqlserver.jdbc.SQLServerDriver]: Bean property
> 'my_random_property' is not writable or has an invalid setter method. Does
> the parameter type of the setter match the return type of the getter?]
> at
> org.apache.ignite.internal.util.spring.IgniteSpringHelperImpl.
> applicationContext(IgniteSpringHelperImpl.java:392)
> at
> org.apache.ignite.internal.util.spring.IgniteSpringHelperImpl.
> loadConfigurations(IgniteSpringHelperImpl.java:104)
> at
> org.apache.ignite.internal.util.spring.IgniteSpringHelperImpl.
> loadConfigurations(IgniteSpringHelperImpl.java:98)
> at
> org.apache.ignite.internal.IgnitionEx.loadConfigurations(
> IgnitionEx.java:744)
> at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.
> java:945)
> at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.
> java:854)
> at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.
> java:724)
> at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.
> java:693)
> at org.apache.ignite.Ignition.start(Ignition.java:352)
> ... 1 more
> Caused by: org.springframework.beans.factory.BeanCreationException: Error
> creating bean with name 'mssqlserver' defined in URL
> [file:/etc/apache-ignite/cache_mssql.xml]: Error setting property values;
> nested exception is org.springframework.beans.
> NotWritablePropertyException:
> Invalid property 'my_random_property' of bean class
> [com.microsoft.sqlserver.jdbc.SQLServerDriver]: Bean property
> 'my_random_property' is not writable or has an invalid setter method. Does
> the parameter type of the setter match the return type of the getter?
> at
> org.springframework.beans.factory.support.AbstractAutowireCapableBeanFac
> tory.applyPropertyValues(AbstractAutowireCapableBeanFactory.java:1568)
> at
> org.springframework.beans.factory.support.AbstractAutowireCapableBeanFac
> tory.populateBean(AbstractAutowireCapableBeanFactory.java:1276)
> at
> org.springframework.beans.factory.support.AbstractAutowireCapableBeanFac
> tory.doCreateBean(AbstractAutowireCapableBeanFactory.java:553)
> at
> org.springframework.beans.factory.support.AbstractAutowireCapableBeanFac
> tory.createBean(AbstractAutowireCapableBeanFactory.java:483)
> at
> org.springframework.beans.factory.support.AbstractBeanFactory$1.
> getObject(AbstractBeanFactory.java:306)
> at
> org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.
> getSingleton(DefaultSingletonBeanRegistry.java:230)
> at
> org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(
> AbstractBeanFactory.java:302)
> at
> org.springframework.beans.factory.support.AbstractBeanFactory.getBean(
> AbstractBeanFactory.java:197)
> at
> org.springframework.beans.factory.support.DefaultListableBeanFactory.
> preInstantiateSingletons(DefaultListableBeanFactory.java:761)
> at
> org.springframework.context.support.AbstractApplicationContext.
> 

Re: RDBMS as backend for Ignite SQL

2018-06-09 Thread Andrey Mashenkov
Hi,

You can use only one backing store per cache.

I'm not sure, but very likely you will not be able to use cross-cache
transactions for caches backed by different stores.
Anyway, you can try.

On Sat, Jun 9, 2018 at 1:15 PM, Michaelikus  wrote:

> Is it possible to use various RDBMS as persisted backend for Ignite SQL ?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>



-- 
Best regards,
Andrey V. Mashenkov


Re: MSSQL as backend for cache

2018-06-09 Thread Andrey Mashenkov
Hi,

Looks like it is an MS SQL driver issue:

Error setting property values; nested exception is
org.springframework.beans.NotWritablePropertyException: Invalid property
'URL' of bean class [com.microsoft.sqlserver.jdbc.SQLServerDriver]:


It seems the URL is generated by the driver itself automatically.
Try to use other properties.

On Sat, Jun 9, 2018 at 12:13 PM, Michaelikus  wrote:

> Hi everybody!
>
> I'm try to config mssql as 3d party persistence for ignite 2.5
>
> downloaded mssql jdbc(sqljdbc42.jar) and placed in $IGNITE_HOME/libs
>
>
> 
>  value="jdbc:sqlserver://sql01:1433;databasename=ingnite-
> test1;integratedSecurity=true;user=test1;password=123321"/>
> 
>
> this is located before main ignite bean config.
>
>
> So when i start ignite i get this error:
>
> class org.apache.ignite.IgniteException: Failed to instantiate Spring XML
> application context [springUrl=file:/etc/apache-ignite/cache_mssql.xml,
> err=Error creating bean with name 'MSSQL' defined in URL
> [file:/etc/apache-ignite/cache_mssql.xml]: Error setting property values;
> nested exception is org.springframework.beans.
> NotWritablePropertyException:
> Invalid property 'URL' of bean class
> [com.microsoft.sqlserver.jdbc.SQLServerDriver]: Bean property 'URL' is not
> writable or has an invalid setter method. Does the parameter type of the
> setter match the return type of the getter?]
> at
> org.apache.ignite.internal.util.IgniteUtils.convertException(IgniteUtils.
> java:990)
> at org.apache.ignite.Ignition.start(Ignition.java:355)
> at
> org.apache.ignite.startup.cmdline.CommandLineStartup.
> main(CommandLineStartup.java:301)
> Caused by: class org.apache.ignite.IgniteCheckedException: Failed to
> instantiate Spring XML application context
> [springUrl=file:/etc/apache-ignite/cache_mssql.xml, err=Error creating
> bean
> with name 'MSSQL' defined in URL [file:/etc/apache-ignite/
> cache_mssql.xml]:
> Error setting property values; nested exception is
> org.springframework.beans.NotWritablePropertyException: Invalid property
> 'URL' of bean class [com.microsoft.sqlserver.jdbc.SQLServerDriver]: Bean
> property 'URL' is not writable or has an invalid setter method. Does the
> parameter type of the setter match the return type of the getter?]
> at
> org.apache.ignite.internal.util.spring.IgniteSpringHelperImpl.
> applicationContext(IgniteSpringHelperImpl.java:392)
> at
> org.apache.ignite.internal.util.spring.IgniteSpringHelperImpl.
> loadConfigurations(IgniteSpringHelperImpl.java:104)
> at
> org.apache.ignite.internal.util.spring.IgniteSpringHelperImpl.
> loadConfigurations(IgniteSpringHelperImpl.java:98)
> at
> org.apache.ignite.internal.IgnitionEx.loadConfigurations(
> IgnitionEx.java:744)
> at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.
> java:945)
> at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.
> java:854)
> at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.
> java:724)
> at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.
> java:693)
> at org.apache.ignite.Ignition.start(Ignition.java:352)
> ... 1 more
>
>
> Can somebody help me to configure it or give a link on document ? :)
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>



-- 
Best regards,
Andrey V. Mashenkov


Re: Failsafe re-entrant locks still held after a node exits.

2018-06-08 Thread Andrey Mashenkov
No. I see no message.

On Fri, Jun 8, 2018 at 7:27 PM, Jon Tricker  wrote:

> Do you see the message "ERROR ; Lock is not released on exit" printed by
> the code?
>
>
> I believe if things are working correctly it should not be.
>
>
> ----------
> *From:* Andrey Mashenkov 
> *Sent:* 08 June 2018 14:40:19
> *To:* user@ignite.apache.org
> *Subject:* Re: Failsafe re-entrant locks still held after a node exits.
>
> Hi,
>
> I've run your code several times and see no errors.
>
> On Fri, Jun 8, 2018 at 12:59 PM, Jon Tricker  wrote:
>
> Hi. I have been observing a problem on 2.5.0 where a node shuts down but
> locks held by it appear to still be held.
>
>
>
> Hopefully the attached demo code illustrates the issues (there is a
> detailed description in the files header comment).
>
>
>
> I have searched the archives and find similar bug reported but they are
> all quite old. I am guessing there is something missing from my
> IgniteConfiguration but an not sure what.
>
>
>
> Thanks for any assistance.
>
> The information in this e-mail and any attachments is confidential and may
> be legally privileged. It is intended solely for the addressee or
> addressees. Any use or disclosure of the contents of this
> e-mail/attachments by a not intended recipient is unauthorized and may be
> unlawful. If you have received this e-mail in error please notify the
> sender. Please note that any views or opinions presented in this e-mail are
> solely those of the author and do not necessarily represent those of
> TEMENOS. We recommend that you check this e-mail and any attachments
> against viruses. TEMENOS accepts no liability for any damage caused by any
> malicious code or virus transmitted by this e-mail.
>
>
>
>
> --
> Best regards,
> Andrey V. Mashenkov
>
> The information in this e-mail and any attachments is confidential and may
> be legally privileged. It is intended solely for the addressee or
> addressees. Any use or disclosure of the contents of this
> e-mail/attachments by a not intended recipient is unauthorized and may be
> unlawful. If you have received this e-mail in error please notify the
> sender. Please note that any views or opinions presented in this e-mail are
> solely those of the author and do not necessarily represent those of
> TEMENOS. We recommend that you check this e-mail and any attachments
> against viruses. TEMENOS accepts no liability for any damage caused by any
> malicious code or virus transmitted by this e-mail.
>



-- 
Best regards,
Andrey V. Mashenkov


Re: How are SQL Queries executed on Ignite Cluster

2018-06-08 Thread Andrey Mashenkov
1. It is not supported for now, but we plan to fix it [1].
2. Not yet. We split the index a different way: every tree manages a certain
set of partitions. Feel the difference.
The "partition -> tree segment" mapping is static and uses the remainder of
a division, like partition_id % N.

A query works the same way as if we had N times more nodes.

The issue here is that we have to "broadcast" even quite simple queries to
all (nodes * N) segments
if it is impossible to calculate the query affinity.

Assume you want to query for 1000 rows from a node with parallelism=32:
(select * from T order by t.c1 limit 1000)
Actually, this query will retrieve 1k rows per index segment, i.e. 32k rows
per node, and then 31k rows will just be filtered out.



[1] https://issues.apache.org/jira/browse/IGNITE-6089
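
The static "partition -> tree segment" mapping described above is just a
remainder. A small self-contained Java sketch — segmentFor is a hypothetical
helper for illustration, not Ignite's internal API:

```java
public class IndexSegments {
    /** Static mapping of a partition to an index tree segment: partition_id % N. */
    static int segmentFor(int partitionId, int queryParallelism) {
        return partitionId % queryParallelism;
    }

    public static void main(String[] args) {
        int parallelism = 32; // e.g. CacheConfiguration#setQueryParallelism(32)
        // Partitions are spread evenly over the segments, so a query that
        // cannot be mapped to one partition must visit all 32 segments.
        for (int p = 0; p < 4; p++)
            System.out.println("partition " + p + " -> segment " + segmentFor(p, parallelism));
    }
}
```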

On Fri, Jun 8, 2018 at 4:07 AM, Sanjeev  wrote:

> trying to understand this:
>
> 1) In case where no indexes are involved and you are doing a table scan, it
> should automatically try to exploit available CPU cores and process each
> partition on a separate thread/core. At least table scan queries should
> entertain the idea dynamic parallelism through DML hints.
>
> 2) In case of indexes, what you are saying is that N trees are build for M
> primary partitions on a node, where N being degree of parallelism. So each
> tree is managing a certain number of partitions, M/N. As number of
> partition
> on a nodes increase or decrease, the N trees are adjusted to reflect that.
>
> What I am wondering is in case if indexes were created, then could we
> always
> create N trees. What are the performance implications of:
> 1) A single thread working on 1 large single index
> 2) A single thread working on 1 or few of the N small indexes based on the
> query.
> 3) N cores working on N small indexes in parallel.
>
> 3 should always perform well. Between 1 and 2, would one perform better or
> worse.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>



-- 
Best regards,
Andrey V. Mashenkov


Re: Ignite 2.5 | Can't restore memory - critical part of WAL archive is missing with walMode=NONE

2018-06-08 Thread Andrey Mashenkov
Hi Emmanuel,

Sorry for the late answer.
I've found that I checked your test against the master branch and the test
passes, but it fails on the 2.5 branch.
It looks like this is already fixed in master.

On Wed, Jun 6, 2018 at 3:47 PM, Emmanuel Marchand <
emmanuel.march...@exensa.com> wrote:

> I was wrong on the introduction of the exception. I guess it was added by
> a fix about IGNITE-8066
> 
> .
> Regards,
> ---
> Emmanuel.
>
>
> On 05/06/18 11:05, Emmanuel Marchand wrote:
>
> Hi,
>
> I'm testing v2.5 vs v2.4 for persisted dataregion with *walModel = NONE*
> and while performance seems better I failed to restart the cluster after
> what I think is a proper shutdown (using top -deactivate then kill -k from
> visor).
>
> When I try to reactivate the cluster (using top -activate from visor) I
> get the following exception on each nodes :
> [09:21:37,592][INFO][grid-nio-worker-tcp-comm-0-#33][TcpCommunicationSpi]
> Accepted incoming communication connection [locAddr=/192.168.1.1:47100,
> rmtAddr=/192.168.1.102:44646]
> [09:21:37,656][INFO][pub-#92][GridClusterStateProcessor] Sending activate
> request with BaselineTopology null
> [09:21:37,659][INFO][tcp-disco-msg-worker-#3][GridClusterStateProcessor]
> Received activate request with BaselineTopology: null
> [09:21:37,661][INFO][tcp-disco-msg-worker-#3][GridClusterStateProcessor]
> Started state transition: true
> [09:21:37,687][INFO][exchange-worker-#52][time] Started exchange init
> [topVer=AffinityTopologyVersion [topVer=69, minorTopVer=1], crd=true,
> evt=DISCOVERY_CUSTOM_EVT, evtNode=0f5d38b7-b748-4861-91ef-204ed9343e60,
> customEvt=ChangeGlobalStateMessage 
> [id=c0eeccec361-85ace6cb-d27e-4a0e-9106-ca39e6fcbfdd,
> reqId=5a1cf16e-f610-4b4b-b1eb-76078be38d6c, 
> initiatingNodeId=0f5d38b7-b748-4861-91ef-204ed9343e60,
> activate=true, baselineTopology=null, forceChangeBaselineTopology=false,
> timestamp=1528183297656], allowMerge=false]
> [09:21:37,688][INFO][exchange-worker-#52][GridDhtPartitionsExchangeFuture]
> Start activation process [nodeId=0f5d38b7-b748-4861-91ef-204ed9343e60,
> client=false, topVer=AffinityTopologyVersion [topVer=69, minorTopVer=1]]
> [09:21:37,688][INFO][exchange-worker-#52][FilePageStoreManager] Resolved
> page store work directory: /usr/share/apache-ignite-
> fabric-2.5.0-bin/work/db/node00-bcfb4de5-5fc6-41e9-9ebd-90b873711c19
> [09:21:37,689][INFO][exchange-worker-#52][FileWriteAheadLogManager]
> Resolved write ahead log work directory: /usr/share/apache-ignite-
> fabric-2.5.0-bin/work/db/wal/node00-bcfb4de5-5fc6-41e9-9ebd-90b873711c19
> [09:21:37,689][INFO][exchange-worker-#52][FileWriteAheadLogManager]
> Resolved write ahead log archive directory: /usr/share/apache-ignite-
> fabric-2.5.0-bin/work/db/wal/archive/node00-bcfb4de5-5fc6-
> 41e9-9ebd-90b873711c19
> [09:21:37,690][WARNING][exchange-worker-#52][FileWriteAheadLogManager]
> Started write-ahead log manager in NONE mode, persisted data may be lost in
> a case of unexpected node failure. Make sure to deactivate the cluster
> before shutdown.
> [09:21:37,701][INFO][exchange-worker-#52][PageMemoryImpl] Started page
> memory [memoryAllocated=100.0 MiB, pages=24804, tableSize=1.9 MiB,
> checkpointBuffer=100.0 MiB]
> [09:21:37,798][INFO][exchange-worker-#52][PageMemoryImpl] Started page
> memory [memoryAllocated=8.0 GiB, pages=2032836, tableSize=158.1 MiB,
> checkpointBuffer=2.0 GiB]
> [09:21:37,800][INFO][exchange-worker-#52][PageMemoryImpl] Started page
> memory [memoryAllocated=100.0 MiB, pages=24804, tableSize=1.9 MiB,
> checkpointBuffer=100.0 MiB]
> [09:21:38,168][INFO][exchange-worker-#52][GridCacheDatabaseSharedManager]
> Read checkpoint status [startMarker=/usr/share/apache-ignite-fabric-2.5.0-
> bin/work/db/node00-bcfb4de5-5fc6-41e9-9ebd-90b873711c19/
> cp/1528182048551-ea54267c-22c4-4b64-b328-87cc09d3d460-START.bin,
> endMarker=/usr/share/apache-ignite-fabric-2.5.0-bin/work/
> db/node00-bcfb4de5-5fc6-41e9-9ebd-90b873711c19/cp/
> 1528182048551-ea54267c-22c4-4b64-b328-87cc09d3d460-END.bin]
> [09:21:38,169][INFO][exchange-worker-#52][GridCacheDatabaseSharedManager]
> Checking memory state [lastValidPos=FileWALPointer [idx=0, fileOff=0,
> len=0], lastMarked=FileWALPointer [idx=0, fileOff=0, len=0],
> lastCheckpointId=ea54267c-22c4-4b64-b328-87cc09d3d460]
> *[09:21:38,228][SEVERE][exchange-worker-#52][] Critical system error
> detected. Will be handled accordingly to configured handler [hnd=class
> o.a.i.failure.StopNodeOrHaltFailureHandler, failureCtx=FailureContext
> [type=CRITICAL_ERROR, err=class o.a.i.i.pagemem.wal.StorageException:
> Restore wal pointer = null, while status.endPtr = FileWALPointer [idx=0,
> fileOff=0, len=0]. Can't restore memory - critical part of WAL archive is
> missing.]]*
> *class org.apache.ignite.internal.pagemem.wal.StorageException: Restore
> wal pointer = null, while status.endPtr = FileWALPointer [idx=0, fileOff=0,
> len=0]. Can't restore memory - critical 

Re: Node pause for no obvious reason

2018-06-08 Thread Andrey Mashenkov
Possibly there is a race.
I've created a ticket for this [1]


[1] https://issues.apache.org/jira/browse/IGNITE-8751

On Fri, Jun 8, 2018 at 4:56 PM, Andrey Mashenkov  wrote:

> Hi,
>
> Looks like node was segmented due to long JVM pause.
> There are 2 "long JVM pause" messages in long an suspicious long
> checkpoint:
>
> Checkpoint finished [cpId=77cf2fa2-2a9f-48ea-bdeb-dda81b15dac1,
> pages=2050858, markPos=FileWALPointer [idx=2051, fileOff=38583904,
> len=15981], walSegmentsCleared=0, markDuration=920ms, pagesWrite=12002ms,
> fsync=965250ms, total=978172ms]
>
>
> Do you use HDD? Are you sure there is no swapping happens?
> Do you have enough free space on disk?
> Is there any other heavy process on server that may took too much CPU time
> and JVM was descheduled from CPU for too long?
>
>
> It looks weird JVM was not restarted. We've to check such case.
>
> On Fri, Jun 8, 2018 at 12:32 PM, Ray  wrote:
>
>> I setup a six node Ignite cluster to test the performance and stability.
>> Here's my setup.
>> > class="org.apache.ignite.configuration.IgniteConfiguration">
>> 
>> 
>> 
>> 
>> > class="org.apache.ignite.configuration.DataStorageConfiguration">
>> 
>> > class="org.apache.ignite.configuration.DataRegionConfiguration">
>> 
>> 
>> 
>> 
>> > value="#{2L *
>> 1024 * 1024 * 1024}"/>
>> 
>> 
>> 
>> 
>> 
>>   
>>
>> And I used this command to start the Ignite node.
>> ./ignite.sh -J-Xmx32000m -J-Xms32000m -J-XX:+UseG1GC
>> -J-XX:+ScavengeBeforeFullGC -J-XX:+DisableExplicitGC -J-XX:+AlwaysPreTouch
>> -J-XX:+PrintGCDetails -J-XX:+PrintGCTimeStamps -J-XX:+PrintGCDateStamps
>> -J-XX:+PrintAdaptiveSizePolicy -J-Xloggc:/ignitegc-$(date
>> +%Y_%m_%d-%H_%M).log  config/persistent-config.xml
>>
>> One of the node just dropped from the topology. Here's the log for last
>> three minutes before this node going down.
>> [08:39:58,982][INFO][grid-timeout-worker-#119][IgniteKernal]
>> Metrics for local node (to disable set 'metricsLogFrequency' to 0)
>> ^-- Node [id=8333aa56, uptime=02:34:01.948]
>> ^-- H/N/C [hosts=9, nodes=16, CPUs=552]
>> ^-- CPU [cur=41%, avg=33.18%, GC=0%]
>> ^-- PageMemory [pages=8912687]
>> ^-- Heap [used=8942MB, free=72.05%, comm=32000MB]
>> ^-- Non heap [used=70MB, free=95.35%, comm=73MB]
>> ^-- Outbound messages queue [size=0]
>> ^-- Public thread pool [active=0, idle=0, qSize=0]
>> ^-- System thread pool [active=0, idle=6, qSize=0]
>> [08:40:51,945][INFO][db-checkpoint-thread-#178][GridCacheDat
>> abaseSharedManager]
>> Checkpoint finished [cpId=77cf2fa2-2a9f-48ea-bdeb-dda81b15dac1,
>> pages=2050858, markPos=FileWALPointer [idx=2051, fileOff=38583904,
>> len=15981], walSegmentsCleared=0, markDuration=920ms, pagesWrite=12002ms,
>> fsync=965250ms, total=978172ms]
>> [08:40:53,086][INFO][db-checkpoint-thread-#178][GridCacheDat
>> abaseSharedManager]
>> Checkpoint started [checkpointId=14d929ac-1b5c-4df2-a71f-002d5eb41f14,
>> startPtr=FileWALPointer [idx=2242, fileOff=65211837, len=15981],
>> checkpointLockWait=0ms, checkpointLockHoldTime=39ms,
>> walCpRecordFsyncDuration=720ms, pages=2110545, reason='timeout']
>> [08:40:57,793][INFO][data-streamer-stripe-1-#58][PageMemoryImpl]
>> Throttling
>> is applied to page modifications [percentOfPartTime=0.22, markDirty=7192
>> pages/sec, checkpointWrite=2450 pages/sec, estIdealMarkDirty=139543
>> pages/sec, curDirty=0.00, maxDirty=0.17, avgParkTime=1732784 ns, pages:
>> (total=2110545, evicted=0, written=875069, synced=0, cpBufUsed=92,
>> cpBufTotal=518215)]
>> [08:40:58,991][INFO][grid-timeout-worker-#119][IgniteKernal]
>> Metrics for local node (to disable set 'metricsLogFrequency' to 0)
>> ^-- Node [id=8333aa56, uptime=02:35:01.957]
>> ^-- H/N/C [hosts=9, nodes=16, CPUs=552]
>> ^-- CPU [cur=9.3%, avg=33%, GC=0%]
>> ^-- PageMemory [pages=8920631]
>> ^-- Heap [used=13262MB, free=58.55%, comm=32000MB]
>> ^-- Non heap [used=70MB, free=95.34%, comm=73MB]
>> ^-- Outbound messages queue [size=0]
>> ^-- Public thread pool [active=0, idle=0, qSize=0]
>> ^-- System thread pool [active=0, idle=6, qSize=0]
>> [08:41:29,050][WARNING][jvm-pause-detector-worker][] Possible too long
>

Re: Error while Starting Grid: javax.management.InstanceAlreadyExistsException: (LruEvictionPolicy)

2018-06-08 Thread Andrey Mashenkov
Hi,

Looks like a bug.

Can you share the grid configuration?
Do you have more than one node in the same JVM?
Did you configure cache groups manually?

On Fri, Jun 8, 2018 at 4:48 PM, HEWA WIDANA GAMAGE, SUBASH <
subash.hewawidanagam...@fmr.com> wrote:

> Hi everyone,
>
>
>
> As a quick note on what we do here, we listen to NODE_FAILED, and
> NODE_SEGMENTED events and upon such events, we use Ignition.stopAll(true)
> and Ignition.start() to restart the Ignite grid in a given JVM. Here Ignite
> does  not starts as a standalone process by itself, but bootstrap
> programmatically since it’s meant to be a part of some other main process.
>
>
>
> So we received a NODE_FAILED evet and restarted Ignite where we see
> following error and start fails. And “mycache” is created with LRU
>  eviction policy at Ignite startup process.
>
>
>
> As per error, it tries to registering an LruEvictionPolicy MBean twice. We
> use a cache named mycache in PARTITIONED mode with 4 nodes in the cluster.
> Any idea for this behavior ?
>
>
>
>
>
> org.apache.ignite.IgniteException: Failed to register MBean for
> component: LruEvictionPolicy [max=10, batchSize=1,
> maxMemSize=524288000, memSize=0, size=0]
>at org.apache.ignite.internal.util.IgniteUtils.
> convertException(IgniteUtils.java:946)
>at org.apache.ignite.Ignition.start(Ignition.java:325)
>at com.test.IgniteRuntime.start(IgniteRuntime.java:87)
>at com.test.segmentation.SegmentationResolver.recycle(
> SegmentationResolver.java:61)
>at com.test.RandomizedDelayResolver.resolve(
> RandomizedDelayResolver.java:47)
>at com.test.SegmentationProcessor.lambda$init$2(SegmentationProcessor.
> java:95)
>at java.util.concurrent.Executors$RunnableAdapter.
> call(Executors.java:511)
>at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
>at java.util.concurrent.ScheduledThreadPoolExecutor$
> ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
>at java.util.concurrent.ScheduledThreadPoolExecutor$
> ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
>at java.util.concurrent.ThreadPoolExecutor.runWorker(
> ThreadPoolExecutor.java:1149)
>at java.util.concurrent.ThreadPoolExecutor$Worker.run(
> ThreadPoolExecutor.java:624)
>at java.lang.Thread.run(Thread.java:748)
> Caused by: org.apache.ignite.IgniteCheckedException: Failed to register
> MBean for component: LruEvictionPolicy [max=10, batchSize=1,
> maxMemSize=524288000, memSize=0, size=0]
>at org.apache.ignite.internal.processors.cache.GridCacheProcessor.
> registerMbean(GridCacheProcessor.java:3518)
>at org.apache.ignite.internal.processors.cache.
> GridCacheProcessor.prepare(GridCacheProcessor.java:557)
>at org.apache.ignite.internal.processors.cache.
> GridCacheProcessor.prepare(GridCacheProcessor.java:529)
>at org.apache.ignite.internal.processors.cache.GridCacheProcessor.
> createCache(GridCacheProcessor.java:1306)
>   at org.apache.ignite.internal.processors.cache.GridCacheProcessor.
> onKernalStart(GridCacheProcessor.java:801)
>at org.apache.ignite.internal.IgniteKernal.start(IgniteKernal.java:959)
>at org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start0(
> IgnitionEx.java:1799)
>at org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start(
> IgnitionEx.java:1602)
>at org.apache.ignite.internal.IgnitionEx.start0(IgnitionEx.java:1042)
>at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:569)
>at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:516)
>at org.apache.ignite.Ignition.start(Ignition.java:322)
>... 11 common frames omitted
> Caused by: javax.management.InstanceAlreadyExistsException:
> org.apache:clsLdr=764c12b6,group=mycache,name=org.apache.
> ignite.cache.eviction.lru.LruEvictionPolicy
>at com.sun.jmx.mbeanserver.Repository.addMBean(Repository.java:437)
>at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.
> registerWithRepository(DefaultMBeanServerInterceptor.java:1898)
>at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.
> registerDynamicMBean(DefaultMBeanServerInterceptor.java:966)
>at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.
> registerObject(DefaultMBeanServerInterceptor.java:900)
>at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(
> DefaultMBeanServerInterceptor.java:324)
>at com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(
> JmxMBeanServer.java:522)
>at org.apache.ignite.internal.util.IgniteUtils.registerCacheMBean(
> IgniteUtils.java:4523)
>at org.apache.ignite.internal.processors.cache.GridCacheProcessor.
> registerMbean(GridCacheProcessor.java:3514)
>... 22 common frames omitted
>
>
>
>
>
>
>
>
>



-- 
Best regards,
Andrey V. Mashenkov


Re: Node pause for no obvious reason

2018-06-08 Thread Andrey Mashenkov
Hi,

Looks like the node was segmented due to a long JVM pause.
There are 2 "long JVM pause" messages in the log and a suspiciously long
checkpoint:

Checkpoint finished [cpId=77cf2fa2-2a9f-48ea-bdeb-dda81b15dac1,
pages=2050858, markPos=FileWALPointer [idx=2051, fileOff=38583904,
len=15981], walSegmentsCleared=0, markDuration=920ms, pagesWrite=12002ms,
fsync=965250ms, total=978172ms]


Do you use an HDD? Are you sure no swapping happens?
Do you have enough free space on disk?
Is there any other heavy process on the server that may take too much CPU
time, so that the JVM was descheduled from the CPU for too long?


It looks weird that the JVM was not restarted. We have to check such a case.

On Fri, Jun 8, 2018 at 12:32 PM, Ray  wrote:

> I setup a six node Ignite cluster to test the performance and stability.
> Here's my setup.
>  class="org.apache.ignite.configuration.IgniteConfiguration">
> 
> 
> 
> 
>  class="org.apache.ignite.configuration.DataStorageConfiguration">
> 
>  class="org.apache.ignite.configuration.DataRegionConfiguration">
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
>   
>
> And I used this command to start the Ignite node.
> ./ignite.sh -J-Xmx32000m -J-Xms32000m -J-XX:+UseG1GC
> -J-XX:+ScavengeBeforeFullGC -J-XX:+DisableExplicitGC -J-XX:+AlwaysPreTouch
> -J-XX:+PrintGCDetails -J-XX:+PrintGCTimeStamps -J-XX:+PrintGCDateStamps
> -J-XX:+PrintAdaptiveSizePolicy -J-Xloggc:/ignitegc-$(date
> +%Y_%m_%d-%H_%M).log  config/persistent-config.xml
>
> One of the node just dropped from the topology. Here's the log for last
> three minutes before this node going down.
> [08:39:58,982][INFO][grid-timeout-worker-#119][IgniteKernal]
> Metrics for local node (to disable set 'metricsLogFrequency' to 0)
> ^-- Node [id=8333aa56, uptime=02:34:01.948]
> ^-- H/N/C [hosts=9, nodes=16, CPUs=552]
> ^-- CPU [cur=41%, avg=33.18%, GC=0%]
> ^-- PageMemory [pages=8912687]
> ^-- Heap [used=8942MB, free=72.05%, comm=32000MB]
> ^-- Non heap [used=70MB, free=95.35%, comm=73MB]
> ^-- Outbound messages queue [size=0]
> ^-- Public thread pool [active=0, idle=0, qSize=0]
> ^-- System thread pool [active=0, idle=6, qSize=0]
> [08:40:51,945][INFO][db-checkpoint-thread-#178][
> GridCacheDatabaseSharedManager]
> Checkpoint finished [cpId=77cf2fa2-2a9f-48ea-bdeb-dda81b15dac1,
> pages=2050858, markPos=FileWALPointer [idx=2051, fileOff=38583904,
> len=15981], walSegmentsCleared=0, markDuration=920ms, pagesWrite=12002ms,
> fsync=965250ms, total=978172ms]
> [08:40:53,086][INFO][db-checkpoint-thread-#178][
> GridCacheDatabaseSharedManager]
> Checkpoint started [checkpointId=14d929ac-1b5c-4df2-a71f-002d5eb41f14,
> startPtr=FileWALPointer [idx=2242, fileOff=65211837, len=15981],
> checkpointLockWait=0ms, checkpointLockHoldTime=39ms,
> walCpRecordFsyncDuration=720ms, pages=2110545, reason='timeout']
> [08:40:57,793][INFO][data-streamer-stripe-1-#58][PageMemoryImpl]
> Throttling
> is applied to page modifications [percentOfPartTime=0.22, markDirty=7192
> pages/sec, checkpointWrite=2450 pages/sec, estIdealMarkDirty=139543
> pages/sec, curDirty=0.00, maxDirty=0.17, avgParkTime=1732784 ns, pages:
> (total=2110545, evicted=0, written=875069, synced=0, cpBufUsed=92,
> cpBufTotal=518215)]
> [08:40:58,991][INFO][grid-timeout-worker-#119][IgniteKernal]
> Metrics for local node (to disable set 'metricsLogFrequency' to 0)
> ^-- Node [id=8333aa56, uptime=02:35:01.957]
> ^-- H/N/C [hosts=9, nodes=16, CPUs=552]
> ^-- CPU [cur=9.3%, avg=33%, GC=0%]
> ^-- PageMemory [pages=8920631]
> ^-- Heap [used=13262MB, free=58.55%, comm=32000MB]
> ^-- Non heap [used=70MB, free=95.34%, comm=73MB]
> ^-- Outbound messages queue [size=0]
> ^-- Public thread pool [active=0, idle=0, qSize=0]
> ^-- System thread pool [active=0, idle=6, qSize=0]
> [08:41:29,050][WARNING][jvm-pause-detector-worker][] Possible too long JVM
> pause: 22667 milliseconds.
> [08:41:29,050][INFO][tcp-disco-sock-reader-#11][TcpDiscoverySpi] Finished
> serving remote node connection [rmtAddr=/10.29.41.23:32815, rmtPort=32815
> [08:41:29,052][INFO][tcp-disco-sock-reader-#13][TcpDiscoverySpi] Finished
> serving remote node connection [rmtAddr=/10.29.41.25:46515, rmtPort=46515
> [08:41:30,063][INFO][data-streamer-stripe-3-#60][PageMemoryImpl]
> Throttling
> is applied to page modifications [percentOfPartTime=0.49, markDirty=26945
> pages/sec, checkpointWrite=2612 pages/sec, estIdealMarkDirty=210815
> pages/sec, curDirty=0.00, maxDirty=0.34, avgParkTime=1024456 ns, pages:
> (total=2110545, evicted=0, written=1861330, synced=0, cpBufUsed=8657,
> cpBufTotal=518215)]
> [08:42:42,276][WARNING][jvm-pause-detector-worker][] Possible too long JVM
> pause: 67967 milliseconds.
> [08:42:42,277][INFO][tcp-disco-msg-worker-#3][TcpDiscoverySpi] Local node
> 

Re: Ignite C# Geometry

2018-06-08 Thread Andrey Mashenkov
Hi,

Here is an example of how to create a binary object [1]. I'm not sure it
will work, but you can try to pass the full Java class name as the
BinaryObject type name. Something like that:

IBinaryObjectBuilder builder =
    ignite.GetBinary().GetBuilder("com.vividsolutions...");

[1] 
https://apacheignite-net.readme.io/docs/binary-mode#section-creating-binary-objects


On Thu, Jun 7, 2018 at 10:46 PM, Jonathan Mayer  wrote:

> Hi Andrew
>
> Thanks for the reply.
>
> " mock vividsolution classes" sounds interesting - is there an example of
> how to mock a class with BinaryObjects?
>
> Many Thanks
>
> Jonathan
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>



-- 
Best regards,
Andrey V. Mashenkov


Re: Failsafe re-entrant locks still held after a node exits.

2018-06-08 Thread Andrey Mashenkov
Hi,

I've run your code several times and see no errors.

On Fri, Jun 8, 2018 at 12:59 PM, Jon Tricker  wrote:

> Hi. I have been observing a problem on 2.5.0 where a node shuts down but
> locks held by it appear to still be held.
>
>
>
> Hopefully the attached demo code illustrates the issues (there is a
> detailed description in the files header comment).
>
>
>
> I have searched the archives and find similar bug reported but they are
> all quite old. I am guessing there is something missing from my
> IgniteConfiguration but an not sure what.
>
>
>
> Thanks for any assistance.
>
> The information in this e-mail and any attachments is confidential and may
> be legally privileged. It is intended solely for the addressee or
> addressees. Any use or disclosure of the contents of this
> e-mail/attachments by a not intended recipient is unauthorized and may be
> unlawful. If you have received this e-mail in error please notify the
> sender. Please note that any views or opinions presented in this e-mail are
> solely those of the author and do not necessarily represent those of
> TEMENOS. We recommend that you check this e-mail and any attachments
> against viruses. TEMENOS accepts no liability for any damage caused by any
> malicious code or virus transmitted by this e-mail.
>



-- 
Best regards,
Andrey V. Mashenkov


Re: Ignite benchmarking on K8s cluster

2018-06-08 Thread Andrey Mashenkov
Hi,

What's actually wrong with yardstick benchmarks?


On Thu, May 24, 2018 at 10:34 AM, vbm  wrote:

> Hi,
>
> We are trying to use yardstick benchmark from gridgain
> (https://www.gridgain.com/resources/benchmarks/running-gridgain-benchmarks
> ).
> These tests are mainly for a non K8s cluster. (based on the logic used in
> script).
>
> Is there any test suite which does benchmark test on K8s cluster.
>
>
> Regards,
> Vishwas
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>



-- 
Best regards,
Andrey V. Mashenkov


Re: IgniteCache invoke CacheEntryProcessor, not throw EntryProcessorException

2018-06-07 Thread Andrey Mashenkov
Hi,

Yes, but no.
You will observe lower latency per operation on the client, and you will be
able to utilize more server resources with fewer threads.
But the operation logic will be the same, no steps will be omitted (backups
will be updated as well)... so the maximum performance that can be achieved
will be the same.
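
For reference, the synchronization mode is a per-cache setting. A minimal
Spring XML sketch — the cache name is a placeholder:

```xml
<bean class="org.apache.ignite.configuration.CacheConfiguration">
    <property name="name" value="myCache"/>
    <!-- FULL_ASYNC: the client does not wait for responses from primary/backups.
         PRIMARY_SYNC and FULL_SYNC wait for the primary or for all copies. -->
    <property name="writeSynchronizationMode" value="FULL_ASYNC"/>
</bean>
```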


On Thu, Jun 7, 2018 at 7:05 PM, haotian.chen 
wrote:

> Awesome, that's super clear!
> One last question, will full_async have better performance over the other
> two usually?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>



-- 
Best regards,
Andrey V. Mashenkov


Re: ignite peer class loading query

2018-06-07 Thread Andrey Mashenkov
1. Peer class loading is designed for compute jobs. All other classes
(except maybe user domain classes when BinaryObjects are used) must be on
the classpath of all nodes.
2. The following is true for a transactional cache: a client node must have
the CacheStore drivers just like the server nodes, because the
request-initiator node is responsible for updating the cache store.
That means when you do a put() from a client, the client node will be
responsible for updating the underlying store.
Maybe this will be fixed at least for atomic caches in future versions, but
the requirement to have the drivers on the client doesn't look like a
serious issue.


On Thu, Jun 7, 2018 at 6:11 PM, David Harvey  wrote:

> Certain kinds of requests (e.g., remote calls) carry enough information
> for the receiver of the message to do peer class loading.   The main
> purpose of peer class loading is to avoid the need to restart the server to
> install software, and the peer class loading is therefore done by the
> server requesting the class from the client who sent the request.
>
> It is not targeted at what you are trying to do.
>
> On Thu, Jun 7, 2018 at 10:28 AM, vbm  wrote:
>
>> Hi,
>>
>> I have a Ignite server running with 3rd partyDB (MYSQL). I have the below
>> configuration in xml file:
>>
>> > class="org.springframework.jdbc.datasource.DriverManagerDataSource">
>>   
>>
>>   
>>   
>> 
>>
>> 
>> * *
>>   ...
>> 
>> 
>>
>> On the server the jar for mysql jdbc driver is present iand the servers
>> are
>> coming up successfully.
>> I have placed the mysql jdbc driver in a seperate folder and added it to
>> the
>> path using USER_LIBS env.
>>
>>
>> Now if I try to start a client, with the same the xml (only extra thing is
>> the client mode part), the client ignite node doesn't start and throws
>> below
>> error.
>>
>> * org.springframework.beans.MethodInvocationException: Property
>> 'driverClassName' threw exception; nested exception is
>> java.lang.IllegalStateException: Could not load JDBC driver class
>> [com.mysql.jdbc.Driver]*
>>
>> As peer class loading is enabled, I expect the client to get the class
>> from
>> the server. Why is this not happening ?
>>
>>
>>
>> --
>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>>
>>
>
>
> *Disclaimer*
>
> The information contained in this communication from the sender is
> confidential. It is intended solely for use by the recipient and others
> authorized to receive it. If you are not the recipient, you are hereby
> notified that any disclosure, copying, distribution or taking action in
> relation of the contents of this information is strictly prohibited and may
> be unlawful.
>
> This email has been scanned for viruses and malware, and may have been
> automatically archived by *Mimecast Ltd*, an innovator in Software as a
> Service (SaaS) for business. Providing a *safer* and *more useful* place
> for your human generated data. Specializing in; Security, archiving and
> compliance. To find out more Click Here
> .
>



-- 
Best regards,
Andrey V. Mashenkov


Re: Cache getting cleareing automatically

2018-06-07 Thread Andrey Mashenkov
Ignite should support the persistence format of older versions.
Also, please take a look at the baseline topology feature, which is
available since version 2.4.


[1] https://apacheignite.readme.io/docs/baseline-topology

On Thu, Jun 7, 2018 at 5:35 PM, siva  wrote:

> If we switch to 2.5 version ,what about data ? I think there is no
> continuous
> upgrade in ignite
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>



-- 
Best regards,
Andrey V. Mashenkov


Re: Cache getting cleareing automatically

2018-06-07 Thread Andrey Mashenkov
Hi,

Try to switch to the latest Ignite 2.5 version as it includes many bugfixes
related to cluster stability.

On Thu, Jun 7, 2018 at 4:57 PM, siva  wrote:

> 1. We are using Ignite version 2.3.
> 2. We are using an API. The client node is a Spring Boot REST application;
> we have written an API to perform CRUD operations as well as clearing and
> destroying the cache. Below is the configuration.
>
> The cache mode was initially *Partitioned*, later changed to Replicated mode:
>
> CacheConfiguration cacheCfg = new
> CacheConfiguration<>(cacheName);
> cacheCfg.setCacheMode(CacheMode.REPLICATED); // Default.
> cacheCfg.setBackups(1);
> cacheCfg.setIndexedTypes(String.class, Entity.class);
> cacheCfg.setAtomicityMode(atomic);
> // client node will wait for write or commit on all
> particpating nodes
>
> cacheCfg.setWriteSynchronizationMode(CacheWriteSynchronizationMode.
> FULL_SYNC);
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>



-- 
Best regards,
Andrey V. Mashenkov


Re: Does equals method have relevance in Custom Key Class

2018-06-07 Thread Andrey Mashenkov
Yes, there are scenarios where you can't use simple data types. This applies
not only to Ignite.

As the Ignite SQL layer is built on top of key-value storage, there will
always be some overhead on SQL operations.

The key-value API lets you operate on one or more cache entries with known
keys.
If the keys of the entries are unknown, then the only way to retrieve entries
efficiently is to use some index: either SQL or your own custom
IndexingSPI implementation.
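The two retrieval paths can be sketched as follows (a hypothetical Person table with an indexed name column is assumed for illustration; this is not code from the thread):

```java
import java.util.List;

import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.query.SqlFieldsQuery;

public class LookupStyles {
    // Key-value path: the key is known, so the entry is fetched directly
    // from the node that owns it.
    static Object byKey(IgniteCache<Long, Object> cache, long id) {
        return cache.get(id);
    }

    // SQL path: the key is unknown; an index on 'name' is what makes
    // locating the matching rows efficient.
    static List<List<?>> byField(IgniteCache<Long, Object> cache, String name) {
        return cache.query(
                new SqlFieldsQuery("select _key from Person where name = ?")
                    .setArgs(name))
            .getAll();
    }
}
```

The SQL path stays efficient only while `name` is indexed; without an index it degrades to a full scan.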




On Wed, Jun 6, 2018 at 3:36 PM, the_palakkaran  wrote:

> But you cannot use simple datatypes all the time in a real scenario,
> right? I
> mean when you need to get results based on multiple fields/columns [unless
> I
> form a key or something based on them]
>
> As per different discussions here in the forum and from some demo stubs, I
> understand performance of ignite in the order high to low will be :
>
> 1. Using Binary Objects direct get
> 2. Using normal serialiazable objects
> 3. Using Externalizable direct get
> 4. Using SQLField query [ queries will always be the one taking most
> performance, from my experience]
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>



-- 
Best regards,
Andrey V. Mashenkov


Re: Cache getting cleareing automatically

2018-06-07 Thread Andrey Mashenkov
Hi,

What Ignite version do you use? It looks like this was fixed in 2.5 [1].
How do you create and destroy the cache: via the API or SQL?


[1] https://issues.apache.org/jira/browse/IGNITE-8021

On Thu, Jun 7, 2018 at 4:32 PM, siva  wrote:

>
> Hi,
> We have two Ignite server nodes on a single physical machine, and one client
> node. We don't know what happened: the cache gets cleared automatically.
> Initially we created a cache named "*c091e548-b45a-49b4-b8ec-2cb5e27c7af6*"
> in "*partitioned*" mode. Later we destroyed the same cache, configured it in
> "*Replicated*" mode, and stored entities under the same cache. But after some
> days the cache got cleared and changed to partitioned mode automatically, and
> we face the same problem again and again every 10 minutes. What might be the
> issue? I have gone through the logs and found that nodes were disconnected
> and then connected again. I am wondering why the cache changed back to
> partitioned mode from Replicated mode?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>



-- 
Best regards,
Andrey V. Mashenkov


Re: Ignite 2.5 nodes do not rejoin the cluster after restart (works on 2.4)

2018-06-07 Thread Andrey Mashenkov
Hi,

What baseline topology does ./control.sh print?
Is it possible that a node outside the baseline started before the baseline
nodes did?

On Thu, Jun 7, 2018 at 9:54 AM, szj  wrote:

> Well, it definitely does work in 2.4. Please notice that there needs to be
> ignitevisorcmd.sh involved to trigger this bug (I didn't try with other
> clients though). Here's what is printed by Java on the console:
>
> [09:28:33]
> [09:28:33] To start Console Management & Monitoring run
> ignitevisorcmd.{sh|bat}
> [09:28:33]
> [09:28:33] Ignite node started OK (id=ae8697ad)
> [09:28:33] Topology snapshot [ver=33, servers=2, clients=0, CPUs=4,
> offheap=2.1GB, heap=2.0GB]
> [09:28:33]   ^-- Node [id=AE8697AD-6421-4C0C-96FE-FC29ED9B6DCA,
> clusterState=ACTIVE]
> [09:28:33]   ^-- Baseline [id=7, size=2, online=2, offline=0]
> [09:28:33] Data Regions Configured:
> [09:28:33]   ^-- default [initSize=256.0 MiB, maxSize=1.4 GiB,
> persistenceEnabled=true]
> [09:29:25] Ignite node stopped OK [uptime=00:00:51.837]
> [09:29:35]__  
> [09:29:35]   /  _/ ___/ |/ /  _/_  __/ __/
> [09:29:35]  _/ // (7 7// /  / / / _/
> [09:29:35] /___/\___/_/|_/___/ /_/ /___/
> [09:29:35]
> [09:29:35] ver. 2.5.0#20180523-sha1:86e110c7
> [09:29:35] 2018 Copyright(C) Apache Software Foundation
> [09:29:35]
> [09:29:35] Ignite documentation: http://ignite.apache.org
> [09:29:35]
> [09:29:35] Quiet mode.
> [09:29:35]   ^-- Logging to file
> '/usr/share/apache-ignite/work/log/ignite-d484e6c6.0.log'
> [09:29:35]   ^-- Logging by 'JavaLogger [quiet=true, config=null]'
> [09:29:35]   ^-- To see **FULL** console log here add -DIGNITE_QUIET=false
> or "-v" to ignite.{sh|bat}
> [09:29:35]
> [09:29:35] OS: Linux 2.6.32-696.18.7.el6.x86_64 amd64
> [09:29:35] VM information: OpenJDK Runtime Environment 1.8.0_121-b13 Oracle
> Corporation OpenJDK 64-Bit Server VM 25.121-b13
> [09:29:35] Configured plugins:
> [09:29:35]   ^-- None
> [09:29:35]
> [09:29:35] Configured failure handler: [hnd=StopNodeOrHaltFailureHandler
> [tryStop=false, timeout=0]]
> [09:29:35] Message queue limit is set to 0 which may lead to potential
> OOMEs
> when running cache operations in FULL_ASYNC or PRIMARY_SYNC modes due to
> message queues growth on sender and receiver sides.
> [09:29:35] Security status [authentication=off, tls/ssl=off]
> [09:29:36,435][SEVERE][tcp-disco-msg-worker-#2][TcpDiscoverySpi]
> TcpDiscoverSpi's message worker thread failed abnormally. Stopping the node
> in order to prevent cluster wide instability.
> class org.apache.ignite.IgniteException: Node with BaselineTopology cannot
> join mixed cluster running in compatibility mode
> at
> org.apache.ignite.internal.processors.cluster.GridClusterStateProcessor.
> onGridDataReceived(GridClusterStateProcessor.java:714)
> at
> org.apache.ignite.internal.managers.discovery.GridDiscoveryManager$5.
> onExchange(GridDiscoveryManager.java:883)
> at
> org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi.
> onExchange(TcpDiscoverySpi.java:1939)
> at
> org.apache.ignite.spi.discovery.tcp.ServerImpl$RingMessageWorker.
> processNodeAddedMessage(ServerImpl.java:4354)
> at
> org.apache.ignite.spi.discovery.tcp.ServerImpl$RingMessageWorker.
> processMessage(ServerImpl.java:2744)
> at
> org.apache.ignite.spi.discovery.tcp.ServerImpl$RingMessageWorker.
> processMessage(ServerImpl.java:2536)
> at
> org.apache.ignite.spi.discovery.tcp.ServerImpl$MessageWorkerAdapter.body(
> ServerImpl.java:6775)
> at
> org.apache.ignite.spi.discovery.tcp.ServerImpl$RingMessageWorker.body(
> ServerImpl.java:2621)
> at
> org.apache.ignite.spi.IgniteSpiThread.run(IgniteSpiThread.java:62)
> [09:29:36,437][SEVERE][tcp-disco-msg-worker-#2][] Critical system error
> detected. Will be handled accordingly to configured handler [hnd=class
> o.a.i.failure.StopNodeOrHaltFailureHandler, failureCtx=FailureContext
> [type=SYSTEM_WORKER_TERMINATION, err=class o.a.i.IgniteException: Node
> with
> BaselineTopology cannot join mixed cluster running in compatibility mode]]
> class org.apache.ignite.IgniteException: Node with BaselineTopology cannot
> join mixed cluster running in compatibility mode
> at
> org.apache.ignite.internal.processors.cluster.GridClusterStateProcessor.
> onGridDataReceived(GridClusterStateProcessor.java:714)
> at
> org.apache.ignite.internal.managers.discovery.GridDiscoveryManager$5.
> onExchange(GridDiscoveryManager.java:883)
> at
> org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi.
> onExchange(TcpDiscoverySpi.java:1939)
> at
> org.apache.ignite.spi.discovery.tcp.ServerImpl$RingMessageWorker.
> processNodeAddedMessage(ServerImpl.java:4354)
> at
> org.apache.ignite.spi.discovery.tcp.ServerImpl$RingMessageWorker.
> processMessage(ServerImpl.java:2744)
> at
> org.apache.ignite.spi.discovery.tcp.ServerImpl$RingMessageWorker.
> processMessage(ServerImpl.java:2536)
> at
> 

Re: Ignite C# Geometry

2018-06-07 Thread Andrey Mashenkov
Hi,

I don't think JTS classes are supported by the Ignite binary protocol.
You can try to use BinaryObjects to mock the vividsolutions classes, or
inline the geometry into the SQL query string.

On Thu, Jun 7, 2018 at 10:34 AM, Jonathan Mayer  wrote:

> Hi Guys,
>
> I realise that 2.5 has just been released and that everyone is busy but...
>
> If I can't get an answer to the below question it is a bit of a show
> stopper.
>
> Do I have to implement my own Data Type mapper for a JTS Geometry within
> the
> Ignite .Net Platform code or is there an easier way to access it from
> within
> C#?
>
> Cheers,
>
> Jonathan
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>



-- 
Best regards,
Andrey V. Mashenkov


Re: Ignite Node failure - Node out of topology (SEGMENTED)

2018-06-07 Thread Andrey Mashenkov
Hi,

It seems there is a bug related to IPv6 usage [1]; it has yet to be investigated.
Also, there is a discussion [2].

[1] https://issues.apache.org/jira/browse/IGNITE-6503
[2]
http://apache-ignite-developers.2346864.n4.nabble.com/Issues-if-Djava-net-preferIPv4Stack-true-is-not-set-td22372.html

On Wed, Jun 6, 2018 at 9:24 PM, naresh.goty  wrote:

> Thanks.
> We have enabled IPV4 JVM option in our non-production environment, found no
> issue reported on segmentation. Our main concern is, the issue is happening
> only in production, and we are very much interested in finding the real
> root
> cause (we can rule out - GC pauses, CPU spikes, network latencies as the
> cause is none of them).
>
> 1) please provide us with any useful tips in identifying the source of the
> problem, so that we can avoid the problem altogether instead of taking a
> remediation steps (of restarting JVM) if the issue happens.
>
> 2) do let us know if any timeout configurations should be increased to
> mitigate the problem?
>
> Regards
> Naresh
>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>



-- 
Best regards,
Andrey V. Mashenkov


Re: IgniteCache invoke CacheEntryProcessor, not throw EntryProcessorException

2018-06-07 Thread Andrey Mashenkov
Hi,

Looks like this is expected behavior.

Ignite has the following sync modes: FULL_SYNC, PRIMARY_SYNC and FULL_ASYNC.
You can detect a failure from the user thread only if it happens during the
synchronous phase.

FULL_SYNC: a cache operation returns control to the user thread after the
operation has been applied on the primary and all backups.
All exceptions that occur during the operation are propagated to the user.

With PRIMARY_SYNC, a cache operation returns control to the user thread after
the operation has been applied on the primary. Backups are updated
asynchronously.
Only exceptions that occur during the primary operation are propagated to the
user. Exceptions from backups are ignored, as nobody waits for the backup
operation results.

With FULL_ASYNC, a cache operation returns control to the user thread
immediately after the request has been sent to the primary. Both the primary
and the backups are updated asynchronously.
Only exceptions that occur while sending the request to the primary node are
propagated to the user. Others are ignored, as nobody waits for the operation
results.
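For reference, the mode is set per cache; a minimal configuration sketch (cache and class names are made up for illustration, assuming the standard Ignite Java API):

```java
import org.apache.ignite.cache.CacheWriteSynchronizationMode;
import org.apache.ignite.configuration.CacheConfiguration;

public class SyncModeConfigs {
    // FULL_SYNC: writes return only after the primary and all backups are
    // updated, so exceptions thrown on any of them reach the calling thread.
    public static CacheConfiguration<String, Integer> fullSync() {
        return new CacheConfiguration<String, Integer>("syncCache")
            .setBackups(1)
            .setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC);
    }

    // FULL_ASYNC: control returns as soon as the request is sent to the
    // primary; failures on the primary or backups are not propagated back.
    public static CacheConfiguration<String, Integer> fullAsync() {
        return new CacheConfiguration<String, Integer>("asyncCache")
            .setBackups(1)
            .setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_ASYNC);
    }
}
```

Switching a cache from FULL_ASYNC to PRIMARY_SYNC or FULL_SYNC is the usual way to surface EntryProcessorException in the caller.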



On Wed, Jun 6, 2018 at 7:24 PM, haotian.chen 
wrote:

> Hi Andrew,
>
> Thanks a lot for the response!
>
> While reproducing the case, I found that: If I set
> CacheWriteSynchronizatoinMode to FULL_ASYNC, EntryProcessorException is
> ignored. But if I set it PRIMARY_SYNC, then the EntryProcessorException
> will
> be thrown.
>
> Is it intentional that under FULL_ASYNC the exception will be ignored?
> Should it be that Exception will be thrown when the CacheEntryProcessor
> eventually ran?
>
> Here is the sample code:
>
>
> import org.apache.ignite.Ignite;
> import org.apache.ignite.IgniteCache;
> import org.apache.ignite.Ignition;
> import org.apache.ignite.cache.*;
> import org.apache.ignite.configuration.CacheConfiguration;
> import org.apache.ignite.configuration.IgniteConfiguration;
>
> import javax.cache.processor.EntryProcessorException;
> import javax.cache.processor.MutableEntry;
>
> public class Main {
> public static void main(String[] args) {
>
> class IncrementOddEntryProcessor implements
> CacheEntryProcessor<String, Integer, Object> {
>
> public Object process(MutableEntry<String, Integer> entry,
> Object... arguments) throws EntryProcessorException {
> Integer value = entry.getValue();
> if (value % 2 == 0) {
> throw new EntryProcessorException("can't operates on
> even number");
> } else {
> entry.setValue(value + 1);
> return null;
> }
> }
>
> }
>
> IgniteConfiguration igniteCfg = new IgniteConfiguration()
> .setGridLogger(new Slf4jLogger())
> .setIgniteInstanceName("testinstance");
>
> Ignite ignite = Ignition.getOrStart(igniteCfg);
>
> try {
> CacheConfiguration<String, Integer> cacheCfg = new
> CacheConfiguration<String, Integer>()
> .setName("testcache")
> .setAtomicityMode(CacheAtomicityMode.ATOMIC)
> .setRebalanceMode(CacheRebalanceMode.ASYNC)
>
> .setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_ASYNC)
>
> .setPartitionLossPolicy(PartitionLossPolicy.READ_WRITE_SAFE);
>
> IgniteCache<String, Integer> cache =
> ignite.createCache(cacheCfg);
>
> cache.put("1", 1);
> cache.put("2", 2);
>
> cache.invoke("1", new IncrementOddEntryProcessor());
> cache.invoke("2", new IncrementOddEntryProcessor());
>
> System.out.println(cache.get("1"));
> System.out.println(cache.get("2"));
> } finally {
> ignite.close();
> }
>
>
> }
> }
>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>



-- 
Best regards,
Andrey V. Mashenkov


Re: Ignite Cluster getting stuck when new node Join or release

2018-06-07 Thread Andrey Mashenkov
Hi,

This is expected if you kill a client node. The grid will wait for
failureDetectionTimeout before dropping the failed node from the topology.
All topology operations will be stuck during that time, as Ignite nodes will
wait for an answer from the failed node until they detect the failure.
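For reference, the relevant timeouts are set on IgniteConfiguration; a sketch (the values shown are the documented defaults, not tuning recommendations):

```java
import org.apache.ignite.configuration.IgniteConfiguration;

public class FailureDetectionConfig {
    public static IgniteConfiguration config() {
        return new IgniteConfiguration()
            // How long a server node may stay unresponsive before it is
            // dropped from the topology (default 10 seconds).
            .setFailureDetectionTimeout(10_000)
            // A separate, typically longer, timeout applies to client
            // nodes (default 30 seconds).
            .setClientFailureDetectionTimeout(30_000);
    }
}
```

Lowering these values makes dead clients disappear from the topology sooner, at the cost of more false positives under GC pauses or network hiccups.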

On Thu, Jun 7, 2018 at 8:22 AM, Sambhaji Sawant 
wrote:

> An issue occurred when we abnormally stopped a Spark Java application that
> has an Ignite client running inside the Spark context. So when we kill the
> Spark application, it abnormally stops the Ignite client; then, when we
> restart our application and the client tries to connect to the Ignite
> cluster, it gets stuck.
>
> On Mon, Jun 4, 2018 at 6:32 PM, dkarachentsev 
> wrote:
>
>> Hi,
>>
>> It's hard to get what's going wrong from your question.
>> Please attach full logs and thread dumps from all server nodes.
>>
>> Thanks!
>> -Dmitry
>>
>>
>>
>> --
>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>>
>
>


-- 
Best regards,
Andrey V. Mashenkov


Re: ClassCastException When Using CacheEntryProcessor in StreamVisitor

2018-06-06 Thread Andrey Mashenkov
Hi,

Is it possible that you changed the lambda code between calls? Or maybe the
classes differ between the nodes?
Try to replace the lambdas with static classes in your code. Will that work
for you?


On Tue, Jun 5, 2018 at 10:28 PM, Cong Guo  wrote:

> Hi,
>
>
>
> The stacktrace is as follows. Do I use the CacheEntryProcessor in the
> right way? May I have an example about how to use CacheEntryProcessor in
> StreamVisitor, please? Thank you!
>
>
>
> javax.cache.processor.EntryProcessorException:
> java.lang.ClassCastException: com.huawei.clusterexperiment.model.Person
> cannot be cast to org.apache.ignite.binary.BinaryObject
>
> at org.apache.ignite.internal.processors.cache.
> CacheInvokeResult.get(CacheInvokeResult.java:102)
>
> at org.apache.ignite.internal.processors.cache.
> IgniteCacheProxyImpl.invoke(IgniteCacheProxyImpl.java:1361)
>
> at org.apache.ignite.internal.processors.cache.
> IgniteCacheProxyImpl.invoke(IgniteCacheProxyImpl.java:1405)
>
> at org.apache.ignite.internal.processors.cache.
> GatewayProtectedCacheProxy.invoke(GatewayProtectedCacheProxy.java:1362)
>
> at com.huawei.clusterexperiment.Client.lambda$streamUpdate$
> 531c8d2f$1(Client.java:337)
>
> at org.apache.ignite.stream.StreamVisitor$1.apply(
> StreamVisitor.java:50)
>
> at org.apache.ignite.stream.StreamVisitor$1.apply(
> StreamVisitor.java:48)
>
> at org.apache.ignite.stream.StreamVisitor.receive(
> StreamVisitor.java:38)
>
> at org.apache.ignite.internal.processors.datastreamer.
> DataStreamerUpdateJob.call(DataStreamerUpdateJob.java:137)
>
> at org.apache.ignite.internal.processors.datastreamer.
> DataStreamProcessor.localUpdate(DataStreamProcessor.java:397)
>
> at org.apache.ignite.internal.processors.datastreamer.
> DataStreamProcessor.processRequest(DataStreamProcessor.java:302)
>
> at org.apache.ignite.internal.processors.datastreamer.
> DataStreamProcessor.access$000(DataStreamProcessor.java:59)
>
> at org.apache.ignite.internal.processors.datastreamer.
> DataStreamProcessor$1.onMessage(DataStreamProcessor.java:89)
>
> at org.apache.ignite.internal.managers.communication.
> GridIoManager.invokeListener(GridIoManager.java:1555)
>
> at org.apache.ignite.internal.managers.communication.
> GridIoManager.processRegularMessage0(GridIoManager.java:1183)
>
> at org.apache.ignite.internal.managers.communication.
> GridIoManager.access$4200(GridIoManager.java:126)
>
> at org.apache.ignite.internal.managers.communication.
> GridIoManager$9.run(GridIoManager.java:1090)
>
> at org.apache.ignite.internal.util.StripedExecutor$Stripe.
> run(StripedExecutor.java:505)
>
> at java.lang.Thread.run(Thread.java:745)
>
> Caused by: java.lang.ClassCastException: 
> com.huawei.clusterexperiment.model.Person
> cannot be cast to org.apache.ignite.binary.BinaryObject
>
> at com.huawei.clusterexperiment.Client$2.process(Client.java:340)
>
> at org.apache.ignite.internal.processors.cache.
> EntryProcessorResourceInjectorProxy.process(EntryProcessorResourceInjector
> Proxy.java:68)
>
> at org.apache.ignite.internal.processors.cache.distributed.
> dht.GridDhtTxPrepareFuture.onEntriesLocked(GridDhtTxPrepareFuture.java:
> 421)
>
> at org.apache.ignite.internal.processors.cache.distributed.
> dht.GridDhtTxPrepareFuture.prepare0(GridDhtTxPrepareFuture.java:1231)
>
> at org.apache.ignite.internal.processors.cache.distributed.
> dht.GridDhtTxPrepareFuture.mapIfLocked(GridDhtTxPrepareFuture.java:671)
>
> at org.apache.ignite.internal.processors.cache.distributed.
> dht.GridDhtTxPrepareFuture.prepare(GridDhtTxPrepareFuture.java:1048)
>
> at org.apache.ignite.internal.processors.cache.distributed.
> near.GridNearTxLocal.prepareAsyncLocal(GridNearTxLocal.java:3452)
>
> at org.apache.ignite.internal.processors.cache.transactions.
> IgniteTxHandler.prepareColocatedTx(IgniteTxHandler.java:257)
>
> at org.apache.ignite.internal.processors.cache.distributed.near.
> GridNearOptimisticTxPrepareFuture.proceedPrepare(
> GridNearOptimisticTxPrepareFuture.java:578)
>
> at org.apache.ignite.internal.processors.cache.distributed.near.
> GridNearOptimisticTxPrepareFuture.prepareSingle(
> GridNearOptimisticTxPrepareFuture.java:405)
>
> at org.apache.ignite.internal.processors.cache.distributed.near.
> GridNearOptimisticTxPrepareFuture.prepare0(GridNearOptimisticTxPrepareFut
> ure.java:348)
>
> at org.apache.ignite.internal.processors.cache.distributed.near.
> GridNearOptimisticTxPrepareFutureAdapter.prepareOnTopology(
> GridNearOptimisticTxPrepareFutureAdapter.java:137)
>
> at org.apache.ignite.internal.processors.cache.distributed.near.
> GridNearOptimisticTxPrepareFutureAdapter.prepare(
> GridNearOptimisticTxPrepareFutureAdapter.java:74)
>
> at org.apache.ignite.internal.processors.cache.distributed.
> 

Re: IgniteCache invoke CacheEntryProcessor, not throw EntryProcessorException

2018-06-06 Thread Andrey Mashenkov
Hi,

Would you please share the full config, or a reproducer if possible?


On Tue, Jun 5, 2018 at 9:56 PM, haotian.chen 
wrote:

> Hi Developers,
>
> I am not sure if I understand the IgniteCache invoke method API correctly
> here:
> https://ignite.apache.org/releases/latest/javadoc/org/
> apache/ignite/IgniteCache.html#invoke-K-org.apache.ignite.cache.
> CacheEntryProcessor-java.lang.Object...-
>  apache/ignite/IgniteCache.html#invoke-K-org.apache.ignite.cache.
> CacheEntryProcessor-java.lang.Object...->
> .
>
> I wrote a CacheEntryProcessor. In CacheEntryProcessor's process method, it
> will detect any IllegalArgumentException of input and wrap the exception as
> EntryProcessorException, and then throw it.
>
> However, when a client invokes such CacheEntryProcessor and expects an
> exception to be thrown, ignite silently ignores the exception and
> proceeds.
>
> I am under ATOMIC and PARTITION mode.
>
> Is this behavior expected?
> Could I make sure that the client get an exception if the CacheEntryProssor
> fails?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>



-- 
Best regards,
Andrey V. Mashenkov


Re: Does equals method have relevance in Custom Key Class

2018-06-06 Thread Andrey Mashenkov
Hi,

1. Yes, use simple types if possible. These types have a much smaller
footprint than POJOs, and Ignite has performance optimizations for simple
types (and some other JDK classes, such as String).
Ignite uses the BinaryObjects [1] concept, which allows querying a certain
field of a user object without deserializing the whole object, but this
costs some memory overhead.

2. See [2] for details. The inline size is the portion of a key\column that
will be inlined into the index tree.
When a field value fits into the inline size, Ignite can compare it
instantly while scanning or updating the index.
Otherwise, Ignite has to go to the data page (where the entry actually
resides) to read the whole field value for the comparison.

A proper inline size may significantly speed up SQL index scan and update
operations.


[1] https://apacheignite.readme.io/docs/binary-marshaller#basic-concepts
[2]
https://apacheignite-sql.readme.io/docs/create-index#section-index-inlining
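A sketch of configuring an index with an explicit inline size through the Java API (the "Person" type and the field name are hypothetical); the SQL equivalent is the INLINE_SIZE clause of CREATE INDEX described in [2]:

```java
import java.util.Collections;

import org.apache.ignite.cache.QueryEntity;
import org.apache.ignite.cache.QueryIndex;
import org.apache.ignite.configuration.CacheConfiguration;

public class InlineSizeConfig {
    public static CacheConfiguration<Long, Object> personCache() {
        QueryEntity person = new QueryEntity(Long.class.getName(), "Person")
            .addQueryField("name", String.class.getName(), null);

        // Inline up to 32 bytes of 'name' into the index tree, so short
        // values are compared without an extra data-page lookup.
        person.setIndexes(Collections.singletonList(
            new QueryIndex("name").setInlineSize(32)));

        return new CacheConfiguration<Long, Object>("personCache")
            .setQueryEntities(Collections.singletonList(person));
    }
}
```

If the inline size is smaller than typical field values, index scans fall back to data-page reads for those rows and the benefit is lost.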


On Wed, Jun 6, 2018 at 2:03 PM, the_palakkaran  wrote:

> Hi Andrew,
>
> Thanks for the reply.
>
> I have two more doubts:
>
> 1. So rather than a Custom key, you are suggesting to use simple types like
> Integer, Long, etc, right? Why is this so?
>
> 2. What does indices with proper inline size mean ? Also, more indices does
> not necessarily give better performance right?
>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>



-- 
Best regards,
Andrey V. Mashenkov


Re: Ignite 2.5 nodes do not rejoin the cluster after restart

2018-06-06 Thread Andrey Mashenkov
Hi,

Is it possible that nodes outside the baseline were started, and a node
with the baseline was able to discover them?

On Wed, Jun 6, 2018 at 9:48 AM, szj  wrote:

> I also tested an upgrade of the PoC 2-node cluster running Ignite 2.4 to
> 2.5.
> Both nodes shut down, upgraded, started on node1, started on node2, cluster
> looking healthy with both nodes ONLINE. Then I shut down one of the nodes
> with "kill -k -al" using batch ignitevisorcmd.sh. Trying to start it brings
> back the good old
>
> class org.apache.ignite.IgniteException: Node with BaselineTopology cannot
> join mixed cluster running in compatibility mode
> at
> org.apache.ignite.internal.processors.cluster.GridClusterStateProcessor.
> onGridDataReceived(GridClusterStateProcessor.java:714)
> at
> org.apache.ignite.internal.managers.discovery.GridDiscoveryManager$5.
> onExchange(GridDiscoveryManager.java:883)
> at
> org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi.
> onExchange(TcpDiscoverySpi.java:1939)
> at
> org.apache.ignite.spi.discovery.tcp.ServerImpl$RingMessageWorker.
> processNodeAddedMessage(ServerImpl.java:4354)
> at
> org.apache.ignite.spi.discovery.tcp.ServerImpl$RingMessageWorker.
> processMessage(ServerImpl.java:2744)
> at
> org.apache.ignite.spi.discovery.tcp.ServerImpl$RingMessageWorker.
> processMessage(ServerImpl.java:2536)
> at
> org.apache.ignite.spi.discovery.tcp.ServerImpl$MessageWorkerAdapter.body(
> ServerImpl.java:6775)
> at
> org.apache.ignite.spi.discovery.tcp.ServerImpl$RingMessageWorker.body(
> ServerImpl.java:2621)
> at
> org.apache.ignite.spi.IgniteSpiThread.run(IgniteSpiThread.java:62)
>
>
> Amazingly when I kicked the node out of the baseline, started it (then it
> does start), added back to the baseline and killed the Java process and
> ignite.sh with the Linux kill command (as mentioned I had to try it on a
> system without systemd) the node DID start (!?).
>
> That made me thing that it has something to do with the ignitevisorcmd.sh.
> What I did I then started ignitevisorcmd.sh on node1 and connected it,
> killed ignite (by killing the process) on node2 and bang! - it would not
> start again with the "mixed cluster running in compatibility mode" garbage.
>
> So my conclusion is that if you restart a node when ignitevisorcmd.sh is
> connected to the mesh on any node (be that the restarted one or any other),
> then you will get the "Node with BaselineTopology cannot join mixed cluster
> running in compatibility mode" error and your node won't start. My
> knowledge
> of Ignite is poor but I think it must have something to do with ignitevisor
> being a kind of a node too. But in that case would any client node
> connected
> cause the same problem? I didn't try - didn't get that far.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>



-- 
Best regards,
Andrey V. Mashenkov


Re: How Ignite performs cache.get() operation when hashCode is overridden ?

2018-06-06 Thread Andrey Mashenkov
Duplicate. Answered on SO [1].


[1]
https://stackoverflow.com/questions/50714462/how-ignite-performs-cache-get-operation-when-hashcode-is-overridden

On Wed, Jun 6, 2018 at 10:03 AM, the_palakkaran 
wrote:

> 
>
>
> See the image attached. I have put keys into the cache using the equals
> method as follows:
>
>
> @Override
>   public boolean equals(Object obj) {
> HierarchyMasterKey hierarchyMasterKey = (HierarchyMasterKey) obj;
> return equalTo(this.hmCustNo, hierarchyMasterKey.hmCustNo) &&
>   equalTo(this.hmFromDate, hierarchyMasterKey.hmFromDate) &&
>   equalTo(this.hmParentCustNo,
> hierarchyMasterKey.hmParentCustNo) &&
>   equalTo(this.hmActNo, hierarchyMasterKey.hmActNo);
>   }
>
> (equalTo method is basically null safe equals check.)
>
> and hashCode is computed as below:
>
> @Override
> public int hashCode() {
> return Objects.hash(hmCustNo,hmActNo);
> }
>
>
> When I try to get from it,
> the equals method won't get executed. Why is this so? How does ignite get
> the key without executing equals?
>
> [I have a cache that has a HierarchyMasterKey and a list of HierarchyMaster
> as values]
>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>



-- 
Best regards,
Andrey V. Mashenkov


Re: Does equals method have relevance in Custom Key Class

2018-06-06 Thread Andrey Mashenkov
Hi,

1. There are many ways to improve SQL query performance.
The basic rules are to use simple types (primitives, String, UUID and some
other JDK classes),
use indices with a proper inline size to avoid unnecessary row lookups, and
use the right collocation.

2. This will not work. Ignite fully relies on internal binary object
comparison methods rather than the 'equals()' method.
A get operation is a key-value API operation that expects the key to be
unique for each entry.
So get() always returns a single value.

Why do you think your approach would work? Have you tried to validate it
with a JDK HashMap?
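The HashMap check is easy to run; a self-contained sketch (class and field names are made up to mirror the question) of why a conditional equals() breaks key lookups:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Objects;

public class ConditionalEqualsDemo {
    // A key whose equals() switches between two fields depending on a flag:
    // the pattern proposed in the question.
    static final class CustomerKey {
        final String custNo;
        final String custId;
        final boolean byCustNo;

        CustomerKey(String custNo, String custId, boolean byCustNo) {
            this.custNo = custNo;
            this.custId = custId;
            this.byCustNo = byCustNo;
        }

        @Override public boolean equals(Object o) {
            if (!(o instanceof CustomerKey)) return false;
            CustomerKey other = (CustomerKey) o;
            return byCustNo ? Objects.equals(custNo, other.custNo)
                            : Objects.equals(custId, other.custId);
        }

        @Override public int hashCode() {
            // hashCode must be consistent with *both* equals branches,
            // which is impossible: hashing custNo breaks custId lookups.
            return Objects.hash(custNo);
        }
    }

    public static void main(String[] args) {
        Map<CustomerKey, String> map = new HashMap<>();
        map.put(new CustomerKey("C-1", "ID-9", true), "record");

        // Lookup by custNo works: same hash, equals() returns true.
        System.out.println(map.get(new CustomerKey("C-1", "ID-9", true)));  // record

        // A "lookup by custId" probe with a different custNo hashes to
        // another bucket, so equals() is never even consulted.
        System.out.println(map.get(new CustomerKey("C-2", "ID-9", false)));  // null
    }
}
```

Ignite behaves analogously, except that it compares serialized binary representations and never calls equals() at all.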

On Tue, Jun 5, 2018 at 8:37 AM, the_palakkaran  wrote:

> The SQL queries are not getting me performance that I want. So I was trying
> for alternative methods to handle the same using get.
>
> Like getting the data using cache.get() and filtering them with the sql
> operations that I actually wanted to do.
>
> Suppose I have an equals method which will have two return cases working
> based on a field value "Y" or "N". So there are basically two cases in
> equals which will get executed based on the field value. That way I can get
> results differently, right?
>
> for example in a customer key class, I have a customer id and a customer
> number fields. In my equals method I have something like this:
>
> if("Y".equals(field))
> return (null != this.custNo && null != custNo &&
> this.custNo.compareTo(CustomerMaster.custNo) == 0)
> else
> return (null != this.custId && null != custId &&
> this.custId.compareTo(CustomerMaster.custId) == 0)
>
> This would fetch me records differently, right?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>



-- 
Best regards,
Andrey V. Mashenkov


Re: Ignite client mvn artifact's purpose?

2018-06-05 Thread Andrey Mashenkov
Hi,

Answered on SO, please take a look [1].

[1]
https://stackoverflow.com/questions/50691453/ignite-client-mvn-artifacts-purpose

On Sun, Jun 3, 2018 at 2:26 PM, mlekshma  wrote:

> Hi Folks,
>
> I notice there is a mvn artifact with artifact id "ignite-clients". What is
> its purpose?
>
> https://mvnrepository.com/artifact/org.apache.ignite/ignite-clients/2.4.0
> 
>
>
> Thanks
> Muthu
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>



-- 
Best regards,
Andrey V. Mashenkov


Re: support for Scala Collection as key?

2018-06-05 Thread Andrey Mashenkov
Hi,

Most likely, a Scala Vector can't be used as a key as-is due to its
implementation [1].
I'm not sure whether the k1 and k2 Scala objects have the same internal
structure, such that Ignite would serialize them to the same binary
representation.


[1] https://stackoverflow.com/questions/20612729/how-does-scalas-vector-work

On Tue, Jun 5, 2018 at 6:41 PM, haotian.chen 
wrote:

> Got it. I thought IgniteCache converts key to BinaryObject and then
> compares
> them, and therefore gave the example.
>
> However, if I put key k1 with a value into IgniteCache, and retrieve the
> value using k2, I won't be able to find the entry. Do you know what's the
> process behind this process?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>



-- 
Best regards,
Andrey V. Mashenkov


Re: Doubts regarding using externalizable in model classes

2018-06-05 Thread Andrey Mashenkov
Hi,

It is a duplicate of a Stack Overflow question [1].
Please find the answer on SO via the link provided.

[1]
https://stackoverflow.com/questions/50697634/using-externalizable-in-apache-ignite

On Tue, Jun 5, 2018 at 8:30 AM, the_palakkaran  wrote:

> I have 3 doubts,
>
> 1. My model classes are externalizable. Still they can be queried using
> SQLFieldsQuery at server node without any problem right?
>
> 2. Externalizable items cannot be queried from a client node in remote,
> right? Is there a way to make it happen at client ?
>
> 3. Are there any other limitations using externalizable or any performance
> concerns?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>



-- 
Best regards,
Andrey V. Mashenkov


Re: Ignite 2.5 | Can't restore memory - critical part of WAL archive is missing with walMode=NONE

2018-06-05 Thread Andrey Mashenkov
Hi,

I've made a simple test and still can't reproduce this.

Would you please take a look in case I missed something?
Is it possible you have more than one region and/or cache?



On Tue, Jun 5, 2018 at 5:41 PM, Emmanuel Marchand <
emmanuel.march...@exensa.com> wrote:

> Hi,
>
> I'm not sure I can provide a reproducer, but here is some informations :
>
>- configuration file attached
>- 2 server nodes, 1 client (+ visor)
>- I'm streaming enough data to trigger a checkpoint with a reason 'too
>many dirty pages'
>   - [INFO][db-checkpoint-thread-#69][GridCacheDatabaseSharedManager]
>   Checkpoint started [checkpointId=225ef67d-2850-499f-860d-f7868f1f73ec,
>   startPtr=FileWALPointer [idx=0, fileOff=0, len=0],
>   checkpointLockWait=151ms, checkpointLockHoldTime=17ms,
>   walCpRecordFsyncDuration=0ms, pages=1508362, reason='too many dirty 
> pages']
>   - no error occurs
>- deactivate cluster then stop nodes using visor
>- restart nodes
>- activate cluster using visor -> crash
>
> Workaround : if I delete (or rename) the checkpoint folder (which is *not*
> empty), the activation completes successfully.
> Regards,
> ---
> Emmanuel.
>
>
>
> On 05/06/18 15:40, Andrey Mashenkov wrote:
>
> Hi,
>
> I can't reproduce the issue.
> Is it possible grid configuration was changed between runs?
> Is it possible to share a reproducer?
>
>
> On Tue, Jun 5, 2018 at 12:05 PM, Emmanuel Marchand <
> emmanuel.march...@exensa.com> wrote:
>
>> Hi,
>>
>> I'm testing v2.5 vs v2.4 for persisted dataregion with *walModel = NONE*
>> and while performance seems better I failed to restart the cluster after
>> what I think is a proper shutdown (using top -deactivate then kill -k from
>> visor).
>>
>> When I try to reactivate the cluster (using top -activate from visor) I
>> get the following exception on each nodes :
>> [09:21:37,592][INFO][grid-nio-worker-tcp-comm-0-#33][TcpCommunicationSpi]
>> Accepted incoming communication connection [locAddr=/192.168.1.1:47100,
>> rmtAddr=/192.168.1.102:44646]
>> [09:21:37,656][INFO][pub-#92][GridClusterStateProcessor] Sending
>> activate request with BaselineTopology null
>> [09:21:37,659][INFO][tcp-disco-msg-worker-#3][GridClusterStateProcessor]
>> Received activate request with BaselineTopology: null
>> [09:21:37,661][INFO][tcp-disco-msg-worker-#3][GridClusterStateProcessor]
>> Started state transition: true
>> [09:21:37,687][INFO][exchange-worker-#52][time] Started exchange init
>> [topVer=AffinityTopologyVersion [topVer=69, minorTopVer=1], crd=true,
>> evt=DISCOVERY_CUSTOM_EVT, evtNode=0f5d38b7-b748-4861-91ef-204ed9343e60,
>> customEvt=ChangeGlobalStateMessage 
>> [id=c0eeccec361-85ace6cb-d27e-4a0e-9106-ca39e6fcbfdd,
>> reqId=5a1cf16e-f610-4b4b-b1eb-76078be38d6c,
>> initiatingNodeId=0f5d38b7-b748-4861-91ef-204ed9343e60, activate=true,
>> baselineTopology=null, forceChangeBaselineTopology=false,
>> timestamp=1528183297656], allowMerge=false]
>> [09:21:37,688][INFO][exchange-worker-#52][GridDhtPartitionsExchangeFuture]
>> Start activation process [nodeId=0f5d38b7-b748-4861-91ef-204ed9343e60,
>> client=false, topVer=AffinityTopologyVersion [topVer=69, minorTopVer=1]]
>> [09:21:37,688][INFO][exchange-worker-#52][FilePageStoreManager] Resolved
>> page store work directory: /usr/share/apache-ignite-fabri
>> c-2.5.0-bin/work/db/node00-bcfb4de5-5fc6-41e9-9ebd-90b873711c19
>> [09:21:37,689][INFO][exchange-worker-#52][FileWriteAheadLogManager]
>> Resolved write ahead log work directory: /usr/share/apache-ignite-fabri
>> c-2.5.0-bin/work/db/wal/node00-bcfb4de5-5fc6-41e9-9ebd-90b873711c19
>> [09:21:37,689][INFO][exchange-worker-#52][FileWriteAheadLogManager]
>> Resolved write ahead log archive directory: /usr/share/apache-ignite-fabri
>> c-2.5.0-bin/work/db/wal/archive/node00-bcfb4de5-5fc6-41e9-
>> 9ebd-90b873711c19
>> [09:21:37,690][WARNING][exchange-worker-#52][FileWriteAheadLogManager]
>> Started write-ahead log manager in NONE mode, persisted data may be lost in
>> a case of unexpected node failure. Make sure to deactivate the cluster
>> before shutdown.
>> [09:21:37,701][INFO][exchange-worker-#52][PageMemoryImpl] Started page
>> memory [memoryAllocated=100.0 MiB, pages=24804, tableSize=1.9 MiB,
>> checkpointBuffer=100.0 MiB]
>> [09:21:37,798][INFO][exchange-worker-#52][PageMemoryImpl] Started page
>> memory [memoryAllocated=8.0 GiB, pages=2032836, tableSize=158.1 MiB,
>> checkpointBuffer=2.0 GiB]
>> [09:21:37,800][INFO][exchange-worker-#52][PageMemoryImpl] Started page
>> memory [memoryAllocated=100.0 MiB, pages=24804, tabl

Re: support for Scala Collection as key?

2018-06-05 Thread Andrey Mashenkov
Hi,

Ignite doesn't rely on the BinaryObject hashcode or equals methods; it uses
internal comparison logic instead.
So it is OK that BinaryObject.equals() returns true only when compared with the
same object.

As Ignite has no hooks for Scala collections, they will be handled as
regular user objects.
Let us know if you observe any issues related to incorrect handling of Scala
collection objects.


AFAIK, there are no plans to add support for Scala collections to the binary
protocol.
It doesn't look like a trivial task, but anyway feel free to contribute.

On Tue, Jun 5, 2018 at 5:22 PM, haotian.chen 
wrote:

> Thanks a lot for the wonderful development on Ignite. I am wondering if
> there
> is any plan to support Scala Collection (e.g. Vector) as key?
>
> There is support for common Java Collection, see here:
> https://github.com/apache/ignite/blob/d4ae653d8018e88847425111321e65
> 3bd558a973/modules/core/src/main/java/org/apache/ignite/
> internal/binary/BinaryContext.java#L320
>
> Scala List seems working fine for me (did not thoroughly test it), but not
> vector. A simple demo here:
>
> val bin = ignite().binary()
> val k1 = Vector(“1”, “2”)
> val k2 = Vector(“1”) ++ Vector(“2”)
> k1 == k2 // true
> binary.toBinary(k1) == binary.toBinary(k2) // false
>
>
> Thanks!
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>



-- 
Best regards,
Andrey V. Mashenkov


Re: Updating the same entry multiple time in Ignite transaction

2018-06-05 Thread Andrey Mashenkov
Hi,

Do you mean you put an entry in one thread and delete it in another?
If so, that is OK.

It seems you have a scenario like this one:
1. The put-thread remembers the entry version at first read and applies the
update only when commit() is called.
2. The delete-thread starts a separate implicit optimistic transaction that
removes the entry in between and changes the entry version.

Actually, the put-thread will not lock the entry on read, but on commit. This is
how an optimistic transaction works.
The exception you saw seems to be caused by a concurrent transactional
operation that occurred right before the put-tx commit, but after the put-tx
read the entry.
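For illustration, the version check described above can be modeled with a minimal stand-alone sketch (plain Java; VersionedEntry and the commit check are hypothetical stand-ins for Ignite's internal read-version validation, not Ignite API):

```java
// Toy model of optimistic SERIALIZABLE conflict detection:
// a transaction remembers the version it read and may commit
// only if the entry still has that version.
public class OptimisticVersionCheck {
    static final class VersionedEntry {
        volatile long version;
        volatile String value;
        VersionedEntry(String value) { this.value = value; }
    }

    // Returns true if the commit succeeded, false on a read/write conflict.
    static boolean commit(VersionedEntry e, long readVersion, String newValue) {
        synchronized (e) {
            if (e.version != readVersion)
                return false; // somebody changed the entry after we read it
            e.value = newValue;
            e.version++;
            return true;
        }
    }

    public static void main(String[] args) {
        VersionedEntry e = new VersionedEntry("a");

        long readVer = e.version;          // put-tx reads the entry
        commit(e, e.version, "deleted");   // delete-tx commits in between

        // put-tx now fails its version check, which in Ignite surfaces
        // as a TransactionOptimisticException on commit
        System.out.println(commit(e, readVer, "b")); // prints "false"
    }
}
```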


On Tue, Jun 5, 2018 at 4:41 PM, Prasad Bhalerao <
prasadbhalerao1...@gmail.com> wrote:

> Hi,
>
> I observed one more behavior. I am using executor service to execute the
> tasks.
> When I submit a task to remove entries from cache, I get transaction
> optimistic exception.
> But instead of submitting the task, if I just call task.run() method it
> works fine.
>
>
> Thanks,
> Prasad
>
> On Tue, Jun 5, 2018 at 7:07 PM, Andrey Mashenkov <
> andrey.mashen...@gmail.com> wrote:
>
>> Hi,
>>
>> We'll check this case.
>> Please, share a reproducer if possible.
>>
>> On Tue, Jun 5, 2018 at 3:53 PM, Prasad Bhalerao <
>> prasadbhalerao1...@gmail.com> wrote:
>>
>>> Hi,
>>>
>>> In my case only one transaction is in progress. In this transaction I
>>> update few entries in cache and then re-read these entries and updates it
>>> again in some other method.
>>> When I read the entry after first update operations, transaction entry
>>> read version(IgniteTxEntry-entryReadVersion) changes. This version is
>>> validated in GridDhtTxPrepareFuture::checkReadConflict method.
>>>
>>> But I don't understand why transaction fails if the entry is modified
>>> after read operation in same transaction?
>>>
>>> Thanks,
>>> Prasad
>>>
>>> On Tue, Jun 5, 2018 at 6:07 PM, Andrey Mashenkov <
>>> andrey.mashen...@gmail.com> wrote:
>>>
>>>> Hi,
>>>>
>>>> It is ok, Optimistic transaction failed on commit
>>>> with TransactionOptimisticException due to write\read conflict.
>>>> See javadoc [1] and documentation [2] for details.
>>>>
>>>> Javadoc says:
>>>> *  However, in {@link TransactionConcurrency#OPTIMISTIC} mode, if some
>>>> transactions cannot be serially isolated
>>>>  *  from each other, then one winner will be picked and the other
>>>> transactions in conflict will result in
>>>>  * {@link TransactionOptimisticException} being thrown.
>>>>
>>>> [1] https://github.com/apache/ignite/blob/master/modules/cor
>>>> e/src/main/java/org/apache/ignite/transactions/Transaction.java
>>>> [2] https://apacheignite.readme.io/docs/transactions#section
>>>> -optimistic-transactions
>>>>
>>>> On Tue, Jun 5, 2018 at 3:19 PM, Prasad Bhalerao <
>>>> prasadbhalerao1...@gmail.com> wrote:
>>>>
>>>>> Hi,
>>>>> I tried this with simple code and it works fine.
>>>>> But in my application when I do similar thing I get following
>>>>> exception when I do commit. I am getting transaction as follows.
>>>>>
>>>>> IgniteTransactions igniteTx = getServiceContext().getGridSer
>>>>> vice().getTransaction();
>>>>> try (Transaction transaction = igniteTx
>>>>> .txStart(TransactionConcurrency.OPTIMISTIC,
>>>>> TransactionIsolation.SERIALIZABLE)) {
>>>>>
>>>>> Can you please advise?
>>>>>
>>>>> Caused by: org.apache.ignite.transactions
>>>>> .TransactionOptimisticException: Failed to prepare transaction,
>>>>> read/write conflict [key=DefaultDataAffinityKey{id=1556524,
>>>>> affinityId=1}, keyCls=com.qls.agms.grid.data.key.DefaultDataAffinityKey,
>>>>> val=null, cache=IPV4_ASSET_GROUP_DETAIL_CACHE, thread=IgniteThread
>>>>> [compositeRwLockIdx=7, stripe=-1, plc=0, name=pub-#53%springDataNode%]]
>>>>> at org.apache.ignite.internal.uti
>>>>> l.IgniteUtils$14.apply(IgniteUtils.java:895)
>>>>> at org.apache.ignite.internal.uti
>>>>> l.IgniteUtils$14.apply(IgniteUtils.java:893)
>>>>> at org.apache.ignite.internal.uti
>>>>> l.IgniteUtils.convertException(IgniteUtils.java:975)
>>>>>

Re: Ignite 2.5 | Can't restore memory - critical part of WAL archive is missing with walMode=NONE

2018-06-05 Thread Andrey Mashenkov
Hi,

I can't reproduce the issue.
Is it possible that the grid configuration was changed between runs?
Is it possible to share a reproducer?


On Tue, Jun 5, 2018 at 12:05 PM, Emmanuel Marchand <
emmanuel.march...@exensa.com> wrote:

> Hi,
>
> I'm testing v2.5 vs v2.4 for persisted dataregion with *walModel = NONE*
> and while performance seems better I failed to restart the cluster after
> what I think is a proper shutdown (using top -deactivate then kill -k from
> visor).
>
> When I try to reactivate the cluster (using top -activate from visor) I
> get the following exception on each nodes :
> [09:21:37,592][INFO][grid-nio-worker-tcp-comm-0-#33][TcpCommunicationSpi]
> Accepted incoming communication connection [locAddr=/192.168.1.1:47100,
> rmtAddr=/192.168.1.102:44646]
> [09:21:37,656][INFO][pub-#92][GridClusterStateProcessor] Sending activate
> request with BaselineTopology null
> [09:21:37,659][INFO][tcp-disco-msg-worker-#3][GridClusterStateProcessor]
> Received activate request with BaselineTopology: null
> [09:21:37,661][INFO][tcp-disco-msg-worker-#3][GridClusterStateProcessor]
> Started state transition: true
> [09:21:37,687][INFO][exchange-worker-#52][time] Started exchange init
> [topVer=AffinityTopologyVersion [topVer=69, minorTopVer=1], crd=true,
> evt=DISCOVERY_CUSTOM_EVT, evtNode=0f5d38b7-b748-4861-91ef-204ed9343e60,
> customEvt=ChangeGlobalStateMessage 
> [id=c0eeccec361-85ace6cb-d27e-4a0e-9106-ca39e6fcbfdd,
> reqId=5a1cf16e-f610-4b4b-b1eb-76078be38d6c, 
> initiatingNodeId=0f5d38b7-b748-4861-91ef-204ed9343e60,
> activate=true, baselineTopology=null, forceChangeBaselineTopology=false,
> timestamp=1528183297656], allowMerge=false]
> [09:21:37,688][INFO][exchange-worker-#52][GridDhtPartitionsExchangeFuture]
> Start activation process [nodeId=0f5d38b7-b748-4861-91ef-204ed9343e60,
> client=false, topVer=AffinityTopologyVersion [topVer=69, minorTopVer=1]]
> [09:21:37,688][INFO][exchange-worker-#52][FilePageStoreManager] Resolved
> page store work directory: /usr/share/apache-ignite-
> fabric-2.5.0-bin/work/db/node00-bcfb4de5-5fc6-41e9-9ebd-90b873711c19
> [09:21:37,689][INFO][exchange-worker-#52][FileWriteAheadLogManager]
> Resolved write ahead log work directory: /usr/share/apache-ignite-
> fabric-2.5.0-bin/work/db/wal/node00-bcfb4de5-5fc6-41e9-9ebd-90b873711c19
> [09:21:37,689][INFO][exchange-worker-#52][FileWriteAheadLogManager]
> Resolved write ahead log archive directory: /usr/share/apache-ignite-
> fabric-2.5.0-bin/work/db/wal/archive/node00-bcfb4de5-5fc6-
> 41e9-9ebd-90b873711c19
> [09:21:37,690][WARNING][exchange-worker-#52][FileWriteAheadLogManager]
> Started write-ahead log manager in NONE mode, persisted data may be lost in
> a case of unexpected node failure. Make sure to deactivate the cluster
> before shutdown.
> [09:21:37,701][INFO][exchange-worker-#52][PageMemoryImpl] Started page
> memory [memoryAllocated=100.0 MiB, pages=24804, tableSize=1.9 MiB,
> checkpointBuffer=100.0 MiB]
> [09:21:37,798][INFO][exchange-worker-#52][PageMemoryImpl] Started page
> memory [memoryAllocated=8.0 GiB, pages=2032836, tableSize=158.1 MiB,
> checkpointBuffer=2.0 GiB]
> [09:21:37,800][INFO][exchange-worker-#52][PageMemoryImpl] Started page
> memory [memoryAllocated=100.0 MiB, pages=24804, tableSize=1.9 MiB,
> checkpointBuffer=100.0 MiB]
> [09:21:38,168][INFO][exchange-worker-#52][GridCacheDatabaseSharedManager]
> Read checkpoint status [startMarker=/usr/share/apache-ignite-fabric-2.5.0-
> bin/work/db/node00-bcfb4de5-5fc6-41e9-9ebd-90b873711c19/
> cp/1528182048551-ea54267c-22c4-4b64-b328-87cc09d3d460-START.bin,
> endMarker=/usr/share/apache-ignite-fabric-2.5.0-bin/work/
> db/node00-bcfb4de5-5fc6-41e9-9ebd-90b873711c19/cp/
> 1528182048551-ea54267c-22c4-4b64-b328-87cc09d3d460-END.bin]
> [09:21:38,169][INFO][exchange-worker-#52][GridCacheDatabaseSharedManager]
> Checking memory state [lastValidPos=FileWALPointer [idx=0, fileOff=0,
> len=0], lastMarked=FileWALPointer [idx=0, fileOff=0, len=0],
> lastCheckpointId=ea54267c-22c4-4b64-b328-87cc09d3d460]
> *[09:21:38,228][SEVERE][exchange-worker-#52][] Critical system error
> detected. Will be handled accordingly to configured handler [hnd=class
> o.a.i.failure.StopNodeOrHaltFailureHandler, failureCtx=FailureContext
> [type=CRITICAL_ERROR, err=class o.a.i.i.pagemem.wal.StorageException:
> Restore wal pointer = null, while status.endPtr = FileWALPointer [idx=0,
> fileOff=0, len=0]. Can't restore memory - critical part of WAL archive is
> missing.]]*
> *class org.apache.ignite.internal.pagemem.wal.StorageException: Restore
> wal pointer = null, while status.endPtr = FileWALPointer [idx=0, fileOff=0,
> len=0]. Can't restore memory - critical part of WAL archive is missing.*
> *at
> org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager.readCheckpointAndRestoreMemory(GridCacheDatabaseSharedManager.java:759)*
> *at
> 

Re: Updating the same entry multiple time in Ignite transaction

2018-06-05 Thread Andrey Mashenkov
Hi,

We'll check this case.
Please, share a reproducer if possible.

On Tue, Jun 5, 2018 at 3:53 PM, Prasad Bhalerao <
prasadbhalerao1...@gmail.com> wrote:

> Hi,
>
> In my case only one transaction is in progress. In this transaction I
> update few entries in cache and then re-read these entries and updates it
> again in some other method.
> When I read the entry after first update operations, transaction entry
> read version(IgniteTxEntry-entryReadVersion) changes. This version is
> validated in GridDhtTxPrepareFuture::checkReadConflict method.
>
> But I don't understand why transaction fails if the entry is modified
> after read operation in same transaction?
>
> Thanks,
> Prasad
>
> On Tue, Jun 5, 2018 at 6:07 PM, Andrey Mashenkov <
> andrey.mashen...@gmail.com> wrote:
>
>> Hi,
>>
>> It is ok, Optimistic transaction failed on commit
>> with TransactionOptimisticException due to write\read conflict.
>> See javadoc [1] and documentation [2] for details.
>>
>> Javadoc says:
>> *  However, in {@link TransactionConcurrency#OPTIMISTIC} mode, if some
>> transactions cannot be serially isolated
>>  *  from each other, then one winner will be picked and the other
>> transactions in conflict will result in
>>  * {@link TransactionOptimisticException} being thrown.
>>
>> [1] https://github.com/apache/ignite/blob/master/modules/cor
>> e/src/main/java/org/apache/ignite/transactions/Transaction.java
>> [2] https://apacheignite.readme.io/docs/transactions#section
>> -optimistic-transactions
>>
>> On Tue, Jun 5, 2018 at 3:19 PM, Prasad Bhalerao <
>> prasadbhalerao1...@gmail.com> wrote:
>>
>>> Hi,
>>> I tried this with simple code and it works fine.
>>> But in my application when I do similar thing I get following exception
>>> when I do commit. I am getting transaction as follows.
>>>
>>> IgniteTransactions igniteTx = getServiceContext().getGridSer
>>> vice().getTransaction();
>>> try (Transaction transaction = igniteTx
>>> .txStart(TransactionConcurrency.OPTIMISTIC,
>>> TransactionIsolation.SERIALIZABLE)) {
>>>
>>> Can you please advise?
>>>
>>> Caused by: org.apache.ignite.transactions.TransactionOptimisticException:
>>> Failed to prepare transaction, read/write conflict
>>> [key=DefaultDataAffinityKey{id=1556524, affinityId=1},
>>> keyCls=com.qls.agms.grid.data.key.DefaultDataAffinityKey, val=null,
>>> cache=IPV4_ASSET_GROUP_DETAIL_CACHE, thread=IgniteThread
>>> [compositeRwLockIdx=7, stripe=-1, plc=0, name=pub-#53%springDataNode%]]
>>> at org.apache.ignite.internal.uti
>>> l.IgniteUtils$14.apply(IgniteUtils.java:895)
>>> at org.apache.ignite.internal.uti
>>> l.IgniteUtils$14.apply(IgniteUtils.java:893)
>>> at org.apache.ignite.internal.uti
>>> l.IgniteUtils.convertException(IgniteUtils.java:975)
>>> at org.apache.ignite.internal.pro
>>> cessors.cache.transactions.TransactionProxyImpl.commit(Trans
>>> actionProxyImpl.java:296)
>>> at com.qls.agms.task.ignite.EditA
>>> ssetGroupIgniteTask.run(EditAssetGroupIgniteTask.java:44)
>>> at org.apache.ignite.internal.pro
>>> cessors.closure.GridClosureProcessor$C4.execute(GridClosureP
>>> rocessor.java:1944)
>>> at org.apache.ignite.internal.processors.job.GridJobWorker$
>>> 2.call(GridJobWorker.java:566)
>>> at org.apache.ignite.internal.uti
>>> l.IgniteUtils.wrapThreadLoader(IgniteUtils.java:6623)
>>> at org.apache.ignite.internal.pro
>>> cessors.job.GridJobWorker.execute0(GridJobWorker.java:560)
>>> ... 5 common frames omitted
>>> Caused by: 
>>> org.apache.ignite.internal.transactions.IgniteTxOptimisticCheckedException:
>>> Failed to prepare transaction, read/write conflict
>>> [key=DefaultDataAffinityKey{id=1556524, affinityId=1},
>>> keyCls=com.qls.agms.grid.data.key.DefaultDataAffinityKey, val=null,
>>> cache=IPV4_ASSET_GROUP_DETAIL_CACHE, thread=IgniteThread
>>> [compositeRwLockIdx=7, stripe=-1, plc=0, name=pub-#53%springDataNode%]]
>>> at org.apache.ignite.internal.pro
>>> cessors.cache.distributed.dht.GridDhtTxPrepareFuture.version
>>> CheckError(GridDhtTxPrepareFuture.java:1190)
>>> at org.apache.ignite.internal.pro
>>> cessors.cache.distributed.dht.GridDhtTxPrepareFuture.checkRe
>>> adConflict(GridDhtTxPrepareFuture.ja

Re: Ignite Optimistic Transactions

2018-06-05 Thread Andrey Mashenkov
Hi,

1. The explanation looks confusing. In other words, Ignite remembers the entry
version in the transaction context at first read access.

2. If another transaction has committed just before the current transaction,
then the current transaction will fail with TransactionOptimisticException,
as Ignite can't resolve the conflict automatically while preserving SERIALIZABLE
guarantees.
The javadoc says it is up to the user to handle this exception and retry the
transaction.

3. Yes, it is ok.

4. A get request shouldn't change the entry version, and Ignite shouldn't fail
the transaction in this case.
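The retry advice from point 2 can be sketched generically (plain Java; ConflictException is a hypothetical stand-in for Ignite's TransactionOptimisticException, and the Runnable stands in for the transaction body):

```java
// Generic retry loop for an optimistic transaction body: rerun the whole
// transaction when a conflict is detected, up to a bounded number of attempts.
public class OptimisticRetry {
    // Stand-in for org.apache.ignite.transactions.TransactionOptimisticException.
    static class ConflictException extends RuntimeException { }

    static void runWithRetries(Runnable txBody, int maxAttempts) {
        for (int attempt = 1; ; attempt++) {
            try {
                txBody.run();          // in real code: txStart(), body, commit()
                return;                // committed successfully
            } catch (ConflictException e) {
                if (attempt == maxAttempts)
                    throw e;           // give up after too many conflicts
            }
        }
    }

    public static void main(String[] args) {
        int[] calls = {0};
        // Fails twice with a conflict, then succeeds on the third attempt.
        runWithRetries(() -> {
            if (++calls[0] < 3)
                throw new ConflictException();
        }, 5);
        System.out.println(calls[0]); // prints "3"
    }
}
```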



On Tue, Jun 5, 2018 at 2:49 PM, Prasad Bhalerao <
prasadbhalerao1...@gmail.com> wrote:

> Hi,
>
> https://apacheignite.readme.io/v2.5/docs/transactions#section-optimistic-
> transactions
>
> As per above link:
>
> *SERIALIZABLE - Stores an entry version upon first read access. Ignite
> will fail a transaction at the commit stage if the Ignite engine detects
> that at least one of the entries used as part of the initiated transaction
> has been modified*
>
> I am little confused with above statement.
>
> When it says it stores an entry version upon first read access, does it
> mean that read access happened with in the same transaction only?
>
> Why does ignite fails the transaction in this scenario?
>
> The data I am working on is collocated on same node. I am using Optimistic
> concurrency modes and isolation level is Serializable. Is this correct?
>
> While transaction is in progress and some other thread (request) outside
> this transaction makes a get request for the data being modified, does
> ignite fails the transaction in this case too?
>
>
>
>
> Thanks,
> Prasad
>



-- 
Best regards,
Andrey V. Mashenkov


Re: How to execute an sql on a specific remote server?

2018-06-05 Thread Andrey Mashenkov
Hi,

What are you trying to achieve?


What is the cache mode? For a REPLICATED cache, your query will be sent to and
executed on one random node.

1. You can send a compute job [1] to a certain node (via
ignite.compute(ignite.cluster().forNode(...)).execute(task))
or an affinity job [2] (ignite.compute().affinityRun()),
which will execute a local SQL query [3].

See the compute docs [1] and the SQL local flag docs [3].

2. If the cache is PARTITIONED, then you may want to send the query to the node
holding a certain partition.
In that case you can try to set partitions explicitly via
SqlFieldsQuery.setPartitions().


[1]
https://apacheignite.readme.io/docs/distributed-closures#call-and-run-methods
[2]
https://apacheignite.readme.io/docs/collocate-compute-and-data#affinity-call-and-run-methods
[3] https://apacheignite-sql.readme.io/docs/local-queries



On Tue, Jun 5, 2018 at 3:35 PM, Shravya Nethula <
shravya.neth...@aline-consulting.com> wrote:

> Hi,
>
> I am trying to execute the following select query using the native java
> client on a cluster of 2 nodes.
>
> String query = "select * from Person";
> List> results = superCache.query(new
> SqlFieldsQuery(query)).getAll();
>
> Now, is any there way to execute the same query on only one specific remote
> node?
>
> Can anyone please post some suggestions in this regard?
>
> Regards,
> Shravya Nethula.
>
>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>



-- 
Best regards,
Andrey V. Mashenkov


Re: Updating the same entry multiple time in Ignite transaction

2018-06-05 Thread Andrey Mashenkov
xPrepareFuture.prepare0(
> GridNearOptimisticSerializableTxPrepareFuture.java:315)
> at org.apache.ignite.internal.processors.cache.distributed.
> near.GridNearOptimisticTxPrepareFutureAdapter.prepareOnTopology(
> GridNearOptimisticTxPrepareFutureAdapter.java:137)
> at org.apache.ignite.internal.processors.cache.distributed.
> near.GridNearOptimisticTxPrepareFutureAdapter.prepare(
> GridNearOptimisticTxPrepareFutureAdapter.java:74)
> at org.apache.ignite.internal.processors.cache.distributed.
> near.GridNearTxLocal.prepareNearTxLocal(GridNearTxLocal.java:3161)
> at org.apache.ignite.internal.processors.cache.distributed.
> near.GridNearTxLocal.commitNearTxLocalAsync(GridNearTxLocal.java:3221)
> at org.apache.ignite.internal.processors.cache.
> GridCacheAdapter.commitTxAsync(GridCacheAdapter.java:4019)
> at org.apache.ignite.internal.processors.cache.
> GridCacheSharedContext.commitTxAsync(GridCacheSharedContext.java:975)
> at org.apache.ignite.internal.processors.cache.transactions.
> TransactionProxyImpl.commit(TransactionProxyImpl.java:288)
> ... 10 common frames omitted
>
> Thanks,
> Prasad
>
> On Tue, Jun 5, 2018 at 2:18 PM, Andrey Mashenkov <
> andrey.mashen...@gmail.com> wrote:
>
>> Hi,
>>
>> Sure, multiple actions with same query within same transaction should
>> work.
>>
>> Please, let us know if you observe unexpected behavior.
>> Any reproducer will be appreciated.
>>
>>
>> On Tue, Jun 5, 2018 at 10:36 AM, Prasad Bhalerao <
>> prasadbhalerao1...@gmail.com> wrote:
>>
>>> Hi,
>>>
>>> Can I update the same entry multiple time inside ignite transaction?
>>> Or
>>> Can I update an entry and then remove the same entry in ignite
>>> transaction?
>>>
>>> Thanks,
>>> Prasad
>>>
>>
>>
>>
>> --
>> Best regards,
>> Andrey V. Mashenkov
>>
>
>


-- 
Best regards,
Andrey V. Mashenkov


Re: Updating the same entry multiple time in Ignite transaction

2018-06-05 Thread Andrey Mashenkov
Hi,

Sure, multiple operations on the same entry within the same transaction should work.

Please, let us know if you observe unexpected behavior.
Any reproducer will be appreciated.


On Tue, Jun 5, 2018 at 10:36 AM, Prasad Bhalerao <
prasadbhalerao1...@gmail.com> wrote:

> Hi,
>
> Can I update the same entry multiple time inside ignite transaction?
> Or
> Can I update an entry and then remove the same entry in ignite transaction?
>
> Thanks,
> Prasad
>



-- 
Best regards,
Andrey V. Mashenkov


Re: How to use Binarylizable interface or Externalizable on my custom Key?

2018-06-04 Thread Andrey Mashenkov
Hi,

You are free to use any hashcode and equals in your classes.
Ignite will convert your POJO to a BinaryObject before saving it to the cache
and use its own hashing anyway.

It is still unclear what you are trying to do.
A key is unique for each cache entry, so it is impossible to get an entry by a
part of the key.
You should use the SQL API for this instead.


On Mon, Jun 4, 2018 at 12:31 PM, the_palakkaran 
wrote:

> Hi Andrew,
>
> I need to somehow override the equals and hashcode method in my key class.
>
> ie;
>
> I have a CustomerKey and CustomerModel configured in a customerCache. I
> need
> to get results from the cache based on the key class that I am passing to
> the cache for query.
>
> like when I pass a key class to the cache that has an equals method in
> which
> I have something like customerNumber == obj.CustomerNo, I need to get
> results based on it. So basically during put to cache also, this equals
> method should have been executed.
>
> Is there any way I could achieve this using Binary Objects or Binarylizable
> or Externalizable ?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>



-- 
Best regards,
Andrey V. Mashenkov


Re: How to use Binarylizable interface or Externalizable on my custom Key?

2018-06-04 Thread Andrey Mashenkov
Hi,

Would you please describe your scenario? Why do you need to use Binarylizable
or Externalizable?

Ignite allows overriding these methods, but it ignores the object's hashcode and
equals and relies on its own implementation.
Ignite operates with BinaryObjects underneath and uses BinaryObject hash
codes.
This allows BinaryObjects to be used on the server side, so having user classes
on the server classpath is optional.

On Mon, Jun 4, 2018 at 9:38 AM, the_palakkaran  wrote:

> Hi,
>
> I understand that by default, ignite does not allow override hash code and
> equals method. I need it to handle my scenarios, so i came across
> Binarylizable interface.
>
> How to do this? Not so clear from documentation.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>



-- 
Best regards,
Andrey V. Mashenkov


Re: IGNITE performance very very very less !!!

2018-05-29 Thread Andrey Mashenkov
Hi,

Why do you think the performance is slow?
Ignite SQL has H2 underneath and has some overhead, as it is first of all a
distributed system, so local query latency can be slightly higher.

How do you measure query latency?
You may need to warm the grid up so that all components involved in the query
are initialized.

What is the query plan?
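For reference, a minimal measurement pattern that separates warm-up iterations from measured ones (plain Java; the runQuery Runnable is a placeholder for the real cache.query(...).getAll() call):

```java
// Measure steady-state latency: discard warm-up iterations so that
// class loading, JIT compilation, and lazy initialization are excluded.
public class LatencyProbe {
    static long medianNanos(Runnable op, int warmup, int measured) {
        for (int i = 0; i < warmup; i++)
            op.run(); // warm-up, not measured
        long[] samples = new long[measured];
        for (int i = 0; i < measured; i++) {
            long t0 = System.nanoTime();
            op.run();
            samples[i] = System.nanoTime() - t0;
        }
        java.util.Arrays.sort(samples);
        return samples[measured / 2]; // median is more robust than the mean
    }

    public static void main(String[] args) {
        Runnable runQuery = () -> { }; // placeholder for the actual SQL query call
        System.out.printf("median: %d ns%n", medianNanos(runQuery, 100, 100));
    }
}
```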



On Tue, May 29, 2018 at 2:13 PM, the_palakkaran 
wrote:

> Hi,
>
> I have 200 caches loaded into a single data region. Among that I have a
> customerCache which has 300K records. It has many indexed fields, including
> customer number.
>
> When I query using *SQLFieldsQuery *on this particular cache (where
> condition has just customer number = ?), it takes around *280ms *to query a
> customer 
>
> This is very slow as compared to H2.
>
> Please provide some tips to improve the performance or let me know if I am
> missing something here.
>
> Ignite is started in embedded mode. Tried keeping data on heap and off
> heap,
> there is no difference in performance.
>
> cache configuration as below:
>
> dataRegionCfg.setInitialSize(8L * 1024 * 1024 * 1024);
> dataRegionCfg.setMaxSize(8L*1024*1024*1024);
> dataRegionCfg.setPersistenceEnabled(false);
> dataStorageCfg.setDataRegionConfigurations(dataRegionCfg);
> dataStorageCfg.setPageSize(4096);
> customerCacheConfig = new CacheConfiguration<>("customerCache");
> customerCacheConfig.setIndexedTypes(CustomerKey.class,
> Customer.class);
> customerCacheConfig.setReadThrough(false);
> customerCacheConfig.setAtomicityMode(ATOMIC);
> customerCacheConfig.setCacheMode(CacheMode.LOCAL);
>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>



-- 
Best regards,
Andrey V. Mashenkov


Re: Cache not rebalanced after one node is restarted

2018-05-21 Thread Andrey Mashenkov
Hi,

It is possible that you put too little data and it all belongs to just 2
partitions (the total partition count is 1024 by default),
and that after the restart you got a different partition distribution across
the 2 nodes.
Please take a look at how the affinity function works [1].

[1]
https://apacheignite.readme.io/docs/affinity-collocation#section-affinity-function
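A simplified illustration of why 10 keys can look unevenly distributed (plain Java modulo hashing; Ignite's actual RendezvousAffinityFunction is more involved, and the partition-to-node assignment additionally depends on the topology):

```java
import java.util.Map;
import java.util.TreeMap;

// With 1024 partitions and only 10 keys, the keys occupy at most 10
// partitions, so the per-node split can easily change after a restart
// once partitions are assigned to nodes differently.
public class PartitionToy {
    static int partition(Object key, int parts) {
        return Math.floorMod(key.hashCode(), parts); // simplified, not Ignite's exact mapping
    }

    public static void main(String[] args) {
        int parts = 1024;
        Map<Integer, Integer> perNode = new TreeMap<>();
        for (int key = 1; key <= 10; key++) {
            int p = partition(key, parts);
            int node = p % 2; // toy assignment of partitions to 2 nodes
            perNode.merge(node, 1, Integer::sum);
        }
        System.out.println(perNode); // prints "{0=5, 1=5}" for Integer keys 1..10
    }
}
```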

On Sun, May 20, 2018 at 5:39 PM, Вадим Васюк  wrote:

> Hi All,
>
> I have a 2 server nodes (with persistence enabled) and one client node
> started on my PC.
> From client I activate cluster and then create a simple cache with below
> configuration and add 10 entries to it:
>
> CacheConfiguration cfg = new CacheConfiguration<>();
> cfg.setName(C);
> cfg.setBackups(1);
> cfg.setRebalanceDelay(1000L);
> cfg.setCacheMode(CacheMode.PARTITIONED);
> cfg.setAtomicityMode(CacheAtomicityMode.ATOMIC);
> cfg.setRebalanceMode(CacheRebalanceMode.SYNC);
> IgniteCache cache = ignite.getOrCreateCache(cfg);
>
> IntStream.range(cache.size(CachePeekMode.ALL)+1, 
> cache.size(CachePeekMode.ALL)+1+10).forEach(i -> {
> cache.put(i, Utils.getRandonString(2));
> }
> );
>
> I have a simple computation task to check which entry went to which server
> and here is the output after I inserted data into the cache:
> server name: 544a56b3-1364-420e-bdbb-380a1460df72cache entries:
> 1,2,4,5,7,8backup entries: 3,6,9,10
> server name: eb630559-c6b4-46a4-a98b-3ba2abfefce9 cache entries:
> 3,6,9,10backup entries: 1,2,4,5,7,8
>
> As you can see all entries are saved and have backups on each other nodes.
>
> However after I restart one of these server nodes, I can see such data
> distribution:
> server name: eb630559-c6b4-46a4-a98b-3ba2abfefce9 cache entries:
> 1,2,3,4,5,6,7,8,9,10backup entries:
> server name: 544a56b3-1364-420e-bdbb-380a1460df72   cache entries:
>  backup entries: 1,2,3,4,5,6,7,8,9,10
>
> As you can see data after one node restart is no longer distributed nicely.
> And from this moment I cannot make it redistribute.
>
> Could you please advice what I may be doing wrong?
>
> Thanks for your reply.
>
>
> --
> Sincerely Yours
> Vadim Vasyuk
>



-- 
Best regards,
Andrey V. Mashenkov


Re: read through cache refresh

2018-05-17 Thread Andrey Mashenkov
Hi,

When your data in the backing store has changed, you should clear the cache to
force Ignite to forget the outdated data.
Yes, cache.clear() will be enough.
If you find this method slow and you have a large dataset, then you can try
to destroy and recreate the cache instead.

The cache will not be blocked during the cache.clear() operation.
You should use either an external lock or try to recreate the cache.
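The "recreate instead of clear" idea can be modeled with a plain-Java atomic reference swap — readers are never blocked; they simply see either the old snapshot or the new one (a simplified analogy, not Ignite's cache lifecycle):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicReference;

// Swap a fully built replacement map in one atomic step, instead of
// clearing the live map entry by entry while readers observe it.
public class SnapshotSwap {
    private final AtomicReference<Map<String, String>> current =
        new AtomicReference<>(new ConcurrentHashMap<>());

    String get(String key) {
        return current.get().get(key); // never blocked by a refresh
    }

    void refresh(Map<String, String> freshData) {
        current.set(new ConcurrentHashMap<>(freshData)); // "destroy and recreate"
    }

    public static void main(String[] args) {
        SnapshotSwap s = new SnapshotSwap();
        s.refresh(Map.of("k", "v1"));
        System.out.println(s.get("k")); // prints "v1"
        s.refresh(Map.of("k", "v2"));
        System.out.println(s.get("k")); // prints "v2"
    }
}
```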

On Thu, May 17, 2018 at 6:47 AM, yonggu.lee 
wrote:

> I have an ignite cache with hbase read-through. Data is fully reconstructed
> on a daily basis on the underlying hbase table.
>
> My question is, when the hbase table is totally changed, calling
> cache.clear() is enough to get the latest data?
>
> And, during processing cache.clear(), all reads to the cache will be
> blocked? I mean, the cache is locked?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>



-- 
Best regards,
Andrey V. Mashenkov


Re: Ignite Node failure - Node out of topology (SEGMENTED)

2018-04-27 Thread Andrey Mashenkov
Hi,

Try to disable IPv6 on all nodes via the JVM option
-Djava.net.preferIPv4Stack=true [1],
as using both IPv4 and IPv6 can cause grid segmentation.


[1]
https://stackoverflow.com/questions/11850655/how-can-i-disable-ipv6-stack-use-for-ipv4-ips-on-jre
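
For embedded deployments where adding JVM flags is inconvenient, the same property can be set programmatically, as long as it runs before any java.net class is initialized (a sketch; the command-line flag remains the most reliable way, since a library loaded earlier may already have touched the networking stack):

```java
public class PreferIpv4 {
    public static void main(String[] args) {
        // Must be the very first thing in main(), before any networking class
        // loads; otherwise the JVM may already have chosen the dual stack.
        System.setProperty("java.net.preferIPv4Stack", "true");

        // ... Ignition.start(cfg) would follow here in an embedded node.
        System.out.println(System.getProperty("java.net.preferIPv4Stack"));
    }
}
```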

On Fri, Apr 27, 2018 at 8:52 AM, naresh.goty  wrote:

> Hi,
>
> We are running apache ignite (v2.3) in embedded mode in a java based
> application with 9 node cluster in our production environment in AWS cloud
> infrastructure.
>
> Most of the time, we don't see any issue with node communication failure,
> but occasionally we find one of the node failure reporting the below error
> message.
>
> WARNING: Node is out of topology (probably, due to short-time network
> problems).
> Apr 16, 2018 5:19:24 AM org.apache.ignite.logger.java.JavaLogger warning
> WARNING: Local node SEGMENTED: TcpDiscoveryNode
> [id=13b6f3ec-a759-408f-9d3f-62f2381c649b, addrs=[0:0:0:0:0:0:0:1%lo,
> 10.40.173.93, 127.0.0.1], sockAddrs=[/10.40.173.93:47500,
> /0:0:0:0:0:0:0:1%lo:47500, /127.0.0.1:47500], discPort=47500, order=157,
> intOrder=83, lastExchangeTime=1523855964541, loc=true,
> ver=2.3.0#20171028-sha1:8add7fd5, isClient=false]
>
> Our analysis so far:
> 1) We are constantly monitoring the GC activities of the node, and can
> confirm that there is no long GC pauses occurred during the time frame of
> the node failure.
>
> 2) There is also no abnormal network spikes reported in AWS instance
> monitors as well.
>
> 3) CPU utilization on the affected node is low. No blocked threads reported
> from thread dumps.
>
> Attached Tomcat Logs of two nodes from the cluster of 9
> TomcatLogs_Node1: provided log details of Network Segmentation failure
> TomcatLogs_Node2: other node provided log info of discovery message
> ApplicationLogs_Node1: Detailed logs of Node stopping exceptions
> Two thread dumps
>
> Could some one provide any insights on how to trace the root cause of this
> issue and to prevent this issue from happening again?
>
> Thanks
> Naresh
>
>
> TomcatLog_Node1.txt
>  t1286/TomcatLog_Node1.txt>
> TomcatLog_Node2.txt
>  t1286/TomcatLog_Node2.txt>
> ApplicationLog_Node1.txt
>  t1286/ApplicationLog_Node1.txt>
> threaddump_1.threaddump_1
>  t1286/threaddump_1.threaddump_1>
> threaddump_2.threaddump_2
>  t1286/threaddump_2.threaddump_2>
>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>



-- 
Best regards,
Andrey V. Mashenkov


Re: Hibernate 5.2 support

2018-04-16 Thread Andrey Mashenkov
Hi,

As the ticket still has no patch, it will most likely be postponed to the next
release.

On Mon, Apr 16, 2018 at 4:27 PM, kestas  wrote:

> I see there is ticket registered for Hibernate 5.2 support:
> https://issues.apache.org/jira/browse/IGNITE-5848
> It is planned for 2.5 release. And release page says
> https://cwiki.apache.org/confluence/display/IGNITE/Apache+Ignite+2.5 the
> 2.5
> version should come out on Apr 30, so it is only 2 weeks away, however,
> there is no activity on Hibernate integration ticket.   So should we expect
> Hibernate 5.2 support in upcoming Apr 30 release?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>



-- 
Best regards,
Andrey V. Mashenkov


Re: NullPointerException in GridCacheTtlManager.expire

2018-04-12 Thread Andrey Mashenkov
Hi Dome,

It is a known issue, and there is a ticket for it [1] that you can track.

It is hard to reproduce due to a race.
It seems GridCacheUtils.unwindEvicts() should check whether the context has
been started.


[1] https://issues.apache.org/jira/browse/IGNITE-7972


On Thu, Apr 12, 2018 at 2:39 PM, dkarachentsev 
wrote:

> Hi Dome,
>
> Could you please attach full logs?
>
> Thanks!
> -Dmitry
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>



-- 
Best regards,
Andrey V. Mashenkov


Re: Running heavy queries on Ignite cluster on backing store directly without impacting the cluster

2018-04-05 Thread Andrey Mashenkov
Hi,

>So are you saying, query execution will not have any impact on the cluster
>activities like GET/PUTs in general ??
Why? Queries and get/put operations share the same resources, so they will
affect each other.

>I was thinking, if we can run these queries directly on backing store, it
>may not have any impact on RAM from where we always do GETs, will that
work?
When you use a CacheStore, you have an external DB and you can run queries on
that DB directly, bypassing Ignite. That part is clear.

1. What "backing store" do you mean when Ignite persistence is used?
2. How do you imagine a query would be run on the "backing store" directly?
3. Why do you think it would help you, as the disk pressure would increase
anyway? Moreover, the disk is usually a bottleneck for PUT operations, and
for GET in some cases (loading from disk or TTL updates).

On Tue, Apr 3, 2018 at 9:27 AM, Naveen  wrote:

> Hi ANdrew
>
> There were cases, when I just run select * from table on SQLLINE
> unknowingly, we could see queries getting slowed down and OOM errors. Our
> dev machines not very high end ones.
>
> When we deliver this solution, production support guys can run queries to
> debug any data related issues, during which they may need to join tables,
> for every adhoc query we cant create indexes to speedup the query execution
> right ??
>
> So are you saying, query execution will not have any impact on the cluster
> activities like GET/PUTs in general ??
>
> I was thinking, if we can run these queries directly on backing store, it
> may not have any impact on RAM from where we always do GETs, will that
> work?
>
> Thanks
> Naveen
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>



-- 
Best regards,
Andrey V. Mashenkov


Re: How to insert multiple rows/data into Cache once

2018-04-02 Thread Andrey Mashenkov
1a. No. DataStreamer doesn't support transactions.
1b. SQL doesn't support transactions yet; transactional SQL is under active
development [1].

2. DataStreamer updates are not propagated to the CacheStore by default; you
can force this with the allowOverwrite option [2].
DataStreamer uses individual entry updates, but you can change this by setting
your own receiver.
DataStreamer sends updates to the primary and backup nodes and updates the
cache directly, so there is no need to start a cluster-wide operation on each
update, which would be costly.

4. Ignite has a powerful invoke() method that allows you to implement your own
logic in an EntryProcessor.


[1]
https://cwiki.apache.org/confluence/display/IGNITE/IEP-3%3A+Transactional+SQL

[2]
https://apacheignite.readme.io/v1.0/docs/data-streamers#section-allow-overwrite
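
Point 2 can be illustrated with a short sketch (cache name, entry types, and counts are made up; Ignite 2.x API). With allowOverwrite(true) the streamer goes through the regular cache-update path, so existing keys are overwritten and a configured write-through CacheStore is invoked:

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteDataStreamer;
import org.apache.ignite.Ignition;

public class StreamerExample {
    /** Streams n entries into the "people" cache and returns the resulting cache size. */
    static int load(Ignite ignite, int n) {
        ignite.getOrCreateCache("people");
        try (IgniteDataStreamer<Integer, String> streamer = ignite.dataStreamer("people")) {
            // Go through the regular update path: overwrites existing keys
            // and triggers write-through to the CacheStore, if configured.
            streamer.allowOverwrite(true);

            for (int i = 0; i < n; i++)
                streamer.addData(i, "person-" + i);
        } // close() flushes the remaining buffered entries

        return ignite.cache("people").size();
    }

    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            System.out.println("Loaded: " + load(ignite, 10_000));
        }
    }
}
```

A custom StreamReceiver can be plugged in via streamer.receiver(...) when per-entry update logic is needed instead of plain puts.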


On Sat, Mar 31, 2018 at 9:03 PM, Prasad Bhalerao <
prasadbhalerao1...@gmail.com> wrote:

> Can someone please reply?
>
> Thanks,
> Prasad
>
>
> On Sat, Mar 31, 2018, 9:02 AM Prasad Bhalerao <
> prasadbhalerao1...@gmail.com> wrote:
>
>> Hi Andrey,
>>
>> I have similar requirement and I am using cache.putAll method to update
>> existing entries or to insert new ones.
>> I will be updating/inserting close to 3 million entries in one go.
>>
>> I am using wrte through approach to update/insert/delete the data in
>> oracle tables.
>> I am using cachestores writeAll/ deleteAll method to achieve this.
>>
>> I am doing this in single  ignite distributed transaction.
>>
>>
>> Now the question is,
>> 1a) Can I use streamer in ignite transaction?
>> 1b) Can I use ignite jdbc bulk update, insert, delete with ignite
>> distributed transaction?
>>
>> 2) if I use streamer will it invoke cache store writeAll method?
>> I meant does write through approach work with streamer.
>>
>>
>> 3) If I use Jdbc bulk mode for cache update and insert or delete, will it
>> invoke cache store's wrieAll and deleteAll method?
>> Does write through approach work with jdbc bulk update/insert/ delete?
>>
>>
>> 4) Does ignite have any apis on cache only for update purpose? Put/putAll
>> will insert or overwrite. What if I just want to update existing entries ?
>>
>>
>> Thanks,
>> Prasad
>>
>> On Fri, Mar 30, 2018, 11:12 PM Andrey Mashenkov <
>> andrey.mashen...@gmail.com> wrote:
>>
>>> Hi,
>>>
>>> Ignite has 2 JDBC drivers.
>>> 1. Client driver [1] starts client node (has all failover features) and
>>> you have to pass client node config in URL.
>>> 2. Thin Driver [2] that connect directly to one of Ignite server node.
>>>
>>> So, you need a first one to be able to use streaming mode.
>>>
>>> [1] https://apacheignite-sql.readme.io/docs/jdbc-client-driver
>>> [2] https://apacheignite-sql.readme.io/docs/jdbc-driver
>>>
>>> On Fri, Mar 30, 2018 at 1:16 PM, <linr...@itri.org.tw> wrote:
>>>
>>>> Hi Andrey,
>>>>
>>>>
>>>>
>>>> I am trying to run [2], as:
>>>>
>>>> // Register JDBC driver.
>>>>
>>>> Class.forName("org.apache.ignite.IgniteJdbcDriver");
>>>>
>>>> // Opening connection in the streaming mode.
>>>>
>>>> Connection conn = DriverManager.getConnection("
>>>> jdbc:ignite:cfg://streaming=true@file:///etc/config/ignite-jdbc.xml");
>>>>
>>>>
>>>>
>>>> However, I'm a bit confused about that setting in [2] about the
>>>> ignite-jdbc.xml.
>>>>
>>>>
>>>>
>>>> I do not know how to find or create the xml, and here I run the ignite
>>>> node via JVM.
>>>>
>>>>
>>>>
>>>> If I can write java code to produce the ignite-jdbc or not? Or only
>>>> complete Spring XML configuration?
>>>>
>>>>
>>>>
>>>> By the way, I have tried the [1], that worked well.
>>>>
>>>>
>>>>
>>>> Finally, I still need to use the SQL as a client node, and quick write
>>>> data into cache.
>>>>
>>>>
>>>>
>>>> Thank you for helping me
>>>>
>>>>
>>>>
>>>> Rick
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> *From:* Andrey Mashenkov [mailto:andrey.mashen...@gmail.com]
>>>> *Sent:* Thursday, March 29, 2018 6:20 PM
>>>> *To:* user@ignite.apache.org
>>>> *Subject:* Re:

Re: Using 3rd party DB together with native persistence (WAS: GettingInvalid state exception when Persistance is enabled.)

2018-04-02 Thread Andrey Mashenkov
Hi

It is very easy for a user to shoot themselves in the foot... and they do it
again and again, e.g. like this one [1].

[1]
http://apache-ignite-users.70518.x6.nabble.com/Data-Loss-while-upgrading-custom-jar-from-old-jar-in-server-and-client-nodes-td20505.html


On Thu, Mar 8, 2018 at 11:28 PM, Dmitriy Setrakyan 
wrote:

> To my knowledge, the 2.4 release should have support for both persistence
> mechanisms, native and 3rd party, working together. The release is out for
> a vote already:
> http://apache-ignite-developers.2346864.n4.nabble.
> com/VOTE-Apache-Ignite-2-4-0-RC1-td27687.html
>
> D.
>
> On Mon, Feb 26, 2018 at 2:43 AM, Humphrey  wrote:
>
>> I think he means when *write-through* and *read-through* modes are
>> enabled on
>> the 3rd party store, data might be written/read to/from one of those
>> persistence storage (not on both).
>>
>> So if you save data "A" it might be stored in the 3rd party persistence,
>> and
>> not in the native. When data "A" is not in the cache it might try to look
>> it
>> up from the native persistence, where it's not available. Same could
>> happen
>> with updates, if "A" was updated to "B" it could have changed in the 3rd
>> party but when requesting for the data again you might in one case get "A"
>> an other case "B" depending on the stores it reads the data from.
>>
>> At least that is what I understand from his consistency between both
>> stores.
>>
>>
>>
>> --
>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>>
>
>


Re: Any idea what this is ?

2018-04-02 Thread Andrey Mashenkov
ls.java:9908)
> [ignite-core-2.4.0.jar:2.4.0]
> at org.apache.ignite.internal.processors.service.GridServiceProcessor.
> copyAndInject(GridServiceProcessor.java:1422)
> ~[ignite-core-2.4.0.jar:2.4.0]
> at org.apache.ignite.internal.processors.service.
> GridServiceProcessor.redeploy(GridServiceProcessor.java:1343)
> [ignite-core-2.4.0.jar:2.4.0]
> at org.apache.ignite.internal.processors.service.GridServiceProcessor.
> processAssignment(GridServiceProcessor.java:1932)
> [ignite-core-2.4.0.jar:2.4.0]
> at org.apache.ignite.internal.processors.service.GridServiceProcessor.
> onSystemCacheUpdated(GridServiceProcessor.java:1595)
> [ignite-core-2.4.0.jar:2.4.0]
> at org.apache.ignite.internal.processors.service.
> GridServiceProcessor.access$300(GridServiceProcessor.java:124)
> [ignite-core-2.4.0.jar:2.4.0]
> at org.apache.ignite.internal.processors.service.GridServiceProcessor$
> ServiceEntriesListener$1.run0(GridServiceProcessor.java:1577)
> [ignite-core-2.4.0.jar:2.4.0]
> at org.apache.ignite.internal.processors.service.GridServiceProcessor$
> DepRunnable.run(GridServiceProcessor.java:2008)
> [ignite-core-2.4.0.jar:2.4.0]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> [?:1.8.0_144]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> [?:1.8.0_144]
> at java.lang.Thread.run(Thread.java:748) [?:1.8.0_144]
> Caused by: org.apache.ignite.binary.BinaryObjectException: Cannot find
> schema for object with compact footer [typeId=-389806882,
> schemaId=1942057561]
> at org.apache.ignite.internal.binary.BinaryReaderExImpl.
> getOrCreateSchema(BinaryReaderExImpl.java:2020)
> ~[ignite-core-2.4.0.jar:2.4.0]
> at org.apache.ignite.internal.binary.BinaryReaderExImpl.<
> init>(BinaryReaderExImpl.java:284) ~[ignite-core-2.4.0.jar:2.4.0]
> at org.apache.ignite.internal.binary.BinaryReaderExImpl.<
> init>(BinaryReaderExImpl.java:183) ~[ignite-core-2.4.0.jar:2.4.0]
> at org.apache.ignite.internal.binary.BinaryReaderExImpl.<
> init>(BinaryReaderExImpl.java:162) ~[ignite-core-2.4.0.jar:2.4.0]
> at 
> org.apache.ignite.internal.binary.GridBinaryMarshaller.deserialize(GridBinaryMarshaller.java:310)
> ~[ignite-core-2.4.0.jar:2.4.0]
> at org.apache.ignite.internal.binary.BinaryMarshaller.
> unmarshal0(BinaryMarshaller.java:99) ~[ignite-core-2.4.0.jar:2.4.0]
> at org.apache.ignite.marshaller.AbstractNodeNameAwareMarshalle
> r.unmarshal(AbstractNodeNameAwareMarshaller.java:82)
> ~[ignite-core-2.4.0.jar:2.4.0]
> at 
> org.apache.ignite.internal.util.IgniteUtils.unmarshal(IgniteUtils.java:9902)
> [ignite-core-2.4.0.jar:2.4.0]
> ... 10 more
>
>
> Den 2018-03-27 kl. 14:57, skrev Andrey Mashenkov:
>
> Hi,
>
> Yes, Ignite stores configuration on disk if native persistence enabled.
> Would you please share a reproducer?
>
>
> On Tue, Mar 27, 2018 at 3:04 PM, Mikael <mikael-arons...@telia.com> wrote:
>
>> Ok, I will have a look at it and see what I can figure out, it's only a
>> test computer running so it is only a single node.
>>
>> One question though, does Ignite save information about the services that
>> was running on disk when a node is stopped ? it looks like that otherwise
>> it would not know about the services that was running before, and does this
>> always happen or is it only when native persistence is enabled ?
>> There is no IGNITE_HOME path set so it uses the temp\ignite directory I
>> assume, this directory had the date of today.
>>
>> I deleted it and tried again and had the same thing.
>>
>> Mikael
>>
>>
>> Den 2018-03-27 kl. 12:43, skrev Andrey Mashenkov:
>>
>> Mikael,
>>
>> Please, let us know if the issue occurs again and no work directories
>> were deleted and no files can be shared between nodes.
>> We'll investigate this. Any reproducer will be appreciated.
>>
>> On Tue, Mar 27, 2018 at 1:40 PM, Andrey Mashenkov <
>> andrey.mashen...@gmail.com> wrote:
>>
>>> Hi Mikael,
>>>
>>> Please check if ignite work directories were not  cleaned in between.
>>> Also check if every node have separate work directory and no files can
>>> be shared.
>>>
>>> Otherwise, it looks like a race.
>>> As a workaround you can specify Classes that can be serialized (Service
>>> classes, Key\Value classes)
>>> in BinaryConfiguration.setClassNames() to force Ignite register classes
>>> at startup.
>>>
>>>
>>>
>>> On Tue, Mar 27, 2018 at 1:25 PM, Mikael <mikael-arons...@telia.com>
>>

Re: Performance of Ignite integrating with PostgreSQL

2018-04-02 Thread Andrey Mashenkov
Hi,

1. Have you tried to test your disk write speed? It is possible the disk was
formatted without respecting partition alignment.
2. Have you tried to check whether there are any Postgres issues, e.g. by
querying Postgres directly? Do you see higher disk pressure?
3. Is it possible you generate data too slowly? Have you tried to run a
multi-threaded test?
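
A quick way to sanity-check sequential write throughput from plain Java (a rough probe only, not a substitute for a proper benchmark tool; it writes a temporary file in 1 MiB blocks and syncs it to disk before measuring):

```java
import java.io.FileOutputStream;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class DiskWriteProbe {
    /** Writes totalMib of zeroes in 1 MiB blocks, fsyncs, and returns MB/s. */
    static double probeMbPerSec(int totalMib) throws IOException {
        Path tmp = Files.createTempFile("disk-probe", ".bin");
        byte[] block = new byte[1 << 20]; // 1 MiB of zeroes
        long start = System.nanoTime();
        try (FileOutputStream out = new FileOutputStream(tmp.toFile())) {
            for (int i = 0; i < totalMib; i++)
                out.write(block);
            out.getFD().sync(); // make sure the data actually hit the disk
        }
        double secs = (System.nanoTime() - start) / 1e9;
        Files.delete(tmp);
        return totalMib / secs;
    }

    public static void main(String[] args) throws IOException {
        System.out.printf("~%.1f MB/s sequential write%n", probeMbPerSec(256));
    }
}
```

If this number is close to the ~2 MB/s observed in the test, the disk itself is the bottleneck rather than Ignite or Postgres.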



On Tue, Mar 27, 2018 at 12:44 PM,  wrote:

> Hi Vinokurov,
>
>
>
> I tried to run your code for 30 minutes monitored by “atop”.
>
> And the average write speed is about 2151.55 KB per second.
>
> Though the performance is better.
>
> But there is still a gap with your testing result.
>
> Is there anything I can improve?
>
> Thanks.
>
>
>
> There is my hardware specifications.
>
> CPU:
>
>   Intel(R) Xeon(R) CPU E3-1220 V2 @ 3.10GHz
>
>   4 cores
>
> Memory:
>
>   16 GB
>
>
>
> Atop observations:
>
> disk   busy   read/s KB/read  writ/s   KB/writ  avque  avserv
> _dsk_
>
> sda89%29.714.8  116.318.5  13.1   6.13 ms
>
>
>
>
>
> Print out parts of time per putAll:
>
> 221ms
>
> 23ms
>
> 22ms
>
> 60ms
>
> 56ms
>
> 71ms
>
> 140ms
>
> 105ms
>
> 117ms
>
> 69ms
>
> 91ms
>
> 89ms
>
> 32ms
>
> 271ms
>
> 24ms
>
> 23ms
>
> 55ms
>
> 90ms
>
> 69ms
>
> 1987ms
>
> 337ms
>
> 316ms
>
> 322ms
>
> 339ms
>
> 101ms
>
> 170ms
>
> 22ms
>
> 41ms
>
> 43ms
>
> 110ms
>
> 668ms
>
> 29ms
>
> 27ms
>
> 28ms
>
> 24ms
>
> 22ms
>
>
>
>
>
> *From:* Pavel Vinokurov [mailto:vinokurov.pa...@gmail.com
> ]
> *Sent:* Thursday, March 22, 2018 11:07 PM
> *To:* user@ignite.apache.org
> *Subject:* Re: Performance of Ignite integrating with PostgreSQL
>
>
>
> In your example you add the same key/values into cache, so it's just
> overwrites entries and persists only 100 entries.
>
> Please look at the project https://bitbucket.org/vinokurov-pavel/ignite-
> postgres . I have ~70-100 Mb/s on my SSD.
>
>
>
> 2018-03-22 11:55 GMT+03:00 :
>
> Hi Vinokurov,
>
>
>
> I changed my code
>
> >> IgniteCache igniteCache = 
> >> ignite.getOrCreateCache("testCache
> ");
>
> To
>
> IgniteCache igniteCache = ignite.cache("testCache");
>
> And update to 2.4.0 version.
>
>
>
> But the writing speed is still about 100 KB per second.
>
>
>
>
>
> Below is jdbc connection initialization:
>
> @Autowired
>
> public NamedParameterJdbcTemplate jdbcTemplate;
>
> @Override
>
> public void start() throws IgniteException {
>
> ConfigurableApplicationContext context = new ClassPathXmlApplicationContext
> ("postgres-context.xml");
>
> this.jdbcTemplate = context.getBean(NamedParameterJdbcTemplate.class);
>
> }
>
>
>
>
>
> The PostgreSQL configuration, “postgres-context.xml” :
>
> 
>
> http://www.springframework.org/schema/beans;
>
>xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance;
>
>xmlns:context="http://www.springframework.org/schema/context;
>
>xsi:schemaLocation="
>
>http://www.springframework.org/schema/beans
>
>http://www.springframework.org/schema/beans/spring-beans.xsd
>
>http://www.springframework.org/schema/context
>
>http://www.springframework.org/schema/context/spring-context.xsd;>
>
>
>
> 
>
> 
>
>
>
> 
>
> 
>
> 
>
> 
>
> 
>
> 
>
> 
>   class="org.springframework.jdbc.core.namedparam.
> NamedParameterJdbcTemplate">
>
> 
>
> 
>
> 
>
>
>
>
>
>
>
> Thanks.
>
>
>
>
>
> *From:* Vinokurov Pavel [mailto:vinokurov.pa...@gmail.com]
> *Sent:* Thursday, March 22, 2018 1:50 PM
>
>
> *To:* user@ignite.apache.org
> *Subject:* Re: Performance of Ignite integrating with PostgreSQL
>
>
>
> Also it makes sense to use new 2.4 version.
>
>
>
> 2018-03-22 8:37 GMT+03:00 Vinokurov Pavel :
>
> >> IgniteCache igniteCache = 
> >> ignite.getOrCreateCache("testCache
> ");
>
> please, change to  ignite.cache("testCache") to be sure the we use
> configuration from the file.
>
>
>
> 2018-03-22 8:19 GMT+03:00 Vinokurov Pavel :
>
> You already showed the cache configuration, but could you show jdbc
> connection initialization
>
>
>
> 2018-03-22 7:59 GMT+03:00 Vinokurov Pavel :
>
> Hi,
>
>
>
> Could you please show the "PATH/example-cache.xml" file.
>
>
>
> 2018-03-21 9:40 GMT+03:00 :
>
> Hi Vinokurov,
>
>
>
> Thanks for your reply.
>
> I try to write batches by 100 entries.
>
> And I got a worse result.
>
> The writing speed is down to 12.09 KB per second.
>
> Below is my code which I try to use putAll and writeAll to rewrite.
>
> Did I make some mistakes?
>
>
>
>
>
>
>
> Main function:
>
> Ignite ignite = Ignition.start("PATH/example-cache.xml");
>
> IgniteCache igniteCache = 
> ignite.getOrCreateCache("testCache
> ");
>
> for(int i = 0; i < 100; i++)
>
> {
>
>  parameterMap.put(Integer.toString(i), 

Re: Spark 'close' API call hangs within ignite service grid

2018-04-02 Thread Andrey Mashenkov
Hi,

A socket exception can be caused by a wrong network or firewall
configuration.
If a node is not able to send a response to another node, it can
cause the grid to hang.

(Un)marshalling exceptions that are not caused by network errors are a
signal that something is going wrong.


On Fri, Mar 23, 2018 at 8:48 AM, akshaym 
wrote:

> I have pushed the sample application to  github
>   . Please check
> it
> once.
>
> Also, I am able to get rid of the hang issue with spark.close API call by
> adding "igniteInstanceName" property. Not sure if its a right approach
> though.
> I came up with this solution, while debugging this issue. What I observed
> is
> that during saving dataframe to ignite, it needs Ignite context. It first
> checks if the context is already there, if it is exists it uses that
> context
> to save dataframe in ignite and on spark close API call it tries to close
> the same context.
> As I am trying to run this spark job as an ignite service, I wanted it to
> run continuously. So closing the ignite context was causing this issue. So
> to make dataframe APIs to create new context everytime, I added
> "igniteInstanceName" property to config which I am apsing to new ignite DF
> APIs.
>
> Though it resolves the hang issue it is still showing some socket
> connection
> and unmarshalling exceptions. Do I need to worry about it? How can I get
> rid
> of those?
>
> Also, Any trade-offs if we use Spark As Ignite Service when executed with
> Yarn?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>



-- 
Best regards,
Andrey V. Mashenkov


Re: Running heavy queries on Ignite cluster on backing store directly without impacting the cluster

2018-04-02 Thread Andrey Mashenkov
Hi,

>In this case also, since we dont have eviction in place, all the time data
>is retrieved from RAM only, the only time request goes to Oracle is for
>Upserts and delete.
Here, all queries run on data in RAM only. Ignite just propagates updates to
the backing store (to Oracle via the CacheStore implementation),
keeping the consistency guarantees defined by the CacheStore configuration.

>So if oracle DB is loaded heavily while running these queries also, it does
>not affect the cluster performance when it comes to data retrievals, am i
>right ??
Not quite: the CacheStore affects performance, because either its updates are
synchronous or its write-behind buffer is limited.

With Ignite persistence you are not limited by RAM. The PageMemory concept
allows you to query data that resides on disk.
Which OOM do you mean, Ignite's or the JVM's, and on which side: client or
server? Why do you think that, with the same dataset, an OOM is more likely
with persistence than with no persistence and no eviction?


On Mon, Apr 2, 2018 at 9:09 AM, Naveen  wrote:

> HI
>
> Let me rephrase my question, guess I have conveyed my question correctly.
>
> Lets take an example
>
> Ignite cluster with backing store as RDBMS - oracle and no eviction in
> place.
>
> As we all know, we can run complex queries on Oracle to retrieve desired
> data.
> In this case also, since we dont have eviction in place, all the time data
> is retrieved from RAM only, the only time request goes to Oracle is for
> Upserts and delete.
> So if oracle DB is loaded heavily while running these queries also, it does
> not affect the cluster performance when it comes to data retrievals, am i
> right ??
> Inserts/Update/Deletes on DB may get slower if DB is loaded with these
> heavy
> queries.
>
> Similar way, how can we achieve this with ignite cluster with native
> persistence ?
> When native persistence is used, if we run some heavy queries on cluster
> thru SQLLINE, it may give out of memory error and node might crash also.
> How
> can we avoid this, it should run on backing store, I am fine if thge query
> exection takes longer time, but cluster should not get crashed.
>
> Hope I conveyed my requirements more clear this time.
>
> Thanks
> Naveen
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>



-- 
Best regards,
Andrey V. Mashenkov


Re: Ignite Client Heap out of Memory issue

2018-04-02 Thread Andrey Mashenkov
Hi Shawn,

The OOM error occurred on a remote server node: there was not sufficient
memory to process the request, but other threads were not affected by it.
It looks like Ignite was able to recover from the error, as the error was
suppressed and a reply was sent to the client.


On Mon, Apr 2, 2018 at 8:22 AM, shawn.du <shawn...@neulion.com.cn> wrote:

> Hi Andrey,
>
> Thanks for your replay. It still confused me for:
> 1) for storm worker process, If it is  because of OOM and crashed. it
> should dump the heap. for I set  -XX:+HeapDumpOnOutOfMemoryError
> but it didn't.  For storm worker, it behaves like a normal fatal error
> which make storm worker restart.
> 2) It did make ignite server heap dump. for I analyzed the dump, it is
> ignite server's. (seeing from PID and java command etc.)
>
> anyone can explain? Thanks.
>
> Thanks
> Shawn
>
> On 3/31/2018 01:36,Andrey Mashenkov<andrey.mashen...@gmail.com>
> <andrey.mashen...@gmail.com> wrote:
>
> Hi Shawn,
>
> 1. Ignite use off heap to store cache entries. Client store no cache data.
> Cache in LOCAL mode can be used on client side, and it should use offheap
> of course.
>
> All data client retreive from server will be in offheap
>
> 2. It is not IgniteOutOfMemory error, but JVM OOM.
> So, try to investigate if there is a memory leak in your code.
>
> On Fri, Mar 30, 2018 at 6:36 AM, shawn.du <shawn...@neulion.com.cn> wrote:
>
>> Hi,
>>
>> My Ignite client heap OOM yesterday.  This is the first time we encounter
>> this issue.
>>
>> My ignite client colocates within Storm worker process. this issue cause
>> storm worker restart.
>> I have several questions about it: our ignite version is 2.3.0
>> 1) if ignite in client mode, it use offheap? how to set the max
>> onheap/offheap memory to use.
>> 2) our storm worker have 8G memory, ignite client print OOM, it doesn't
>> trigger storm worker to dump the heap.
>> but we get a ignite server's heap dump.  ignite server didn't die.
>> The ignite server's heap dump is very small. only have 200M.
>> which process is OOM? worker or ignite server?
>>
>> This is logs:  Thanks in advance.
>>
>>   Suppressed: org.apache.ignite.IgniteCheckedException: Failed to update
>> keys on primary node.
>> at org.apache.ignite.internal.pro
>> cessors.cache.distributed.dht.atomic.UpdateErrors.addFailedKeys(UpdateErrors.java:124)
>> ~[stormjar.jar:?]
>> at org.apache.ignite.internal.pro
>> cessors.cache.distributed.dht.atomic.GridNearAtomicUpdateRes
>> ponse.addFailedKeys(GridNearAtomicUpdateResponse.java:342)
>> ~[stormjar.jar:?]
>> at org.apache.ignite.internal.pro
>> cessors.cache.distributed.dht.atomic.GridDhtAtomicCache.upda
>> teAllAsyncInternal0(GridDhtAtomicCache.java:1784) ~[stormjar.jar:?]
>> at org.apache.ignite.internal.pro
>> cessors.cache.distributed.dht.atomic.GridDhtAtomicCache.upda
>> teAllAsyncInternal(GridDhtAtomicCache.java:1627) ~[stormjar.jar:?]
>> at org.apache.ignite.internal.pro
>> cessors.cache.distributed.dht.atomic.GridDhtAtomicCache.proc
>> essNearAtomicUpdateRequest(GridDhtAtomicCache.java:3054)
>> ~[stormjar.jar:?]
>> at org.apache.ignite.internal.pro
>> cessors.cache.distributed.dht.atomic.GridDhtAtomicCache.acce
>> ss$400(GridDhtAtomicCache.java:129) ~[stormjar.jar:?]
>> at org.apache.ignite.internal.pro
>> cessors.cache.distributed.dht.atomic.GridDhtAtomicCache$5.
>> apply(GridDhtAtomicCache.java:265) ~[stormjar.jar:?]
>> at org.apache.ignite.internal.pro
>> cessors.cache.distributed.dht.atomic.GridDhtAtomicCache$5.
>> apply(GridDhtAtomicCache.java:260) ~[stormjar.jar:?]
>> at org.apache.ignite.internal.pro
>> cessors.cache.GridCacheIoManager.processMessage(GridCacheIoManager.java:1060)
>> ~[stormjar.jar:?]
>> at org.apache.ignite.internal.pro
>> cessors.cache.GridCacheIoManager.onMessage0(GridCacheIoManager.java:579)
>> ~[stormjar.jar:?]
>> at org.apache.ignite.internal.pro
>> cessors.cache.GridCacheIoManager.handleMessage(GridCacheIoManager.java:378)
>> ~[stormjar.jar:?]
>> at org.apache.ignite.internal.pro
>> cessors.cache.GridCacheIoManager.handleMessage(GridCacheIoManager.java:304)
>> ~[stormjar.jar:?]
>> at org.apache.ignite.internal.pro
>> cessors.cache.GridCacheIoManager.access$100(GridCacheIoManager.java:99)
>> ~[stormjar.jar:?]
>> at org.apache.ignite.internal.pro
>> cessors.cac

Re: Distributed lock

2018-04-02 Thread Andrey Mashenkov
Hi,

If the lock resides on the node that holds it, how will a newly joined node
discover the just-locked lock instance by name in order to add itself to the
waiting queue?
Who will be responsible for handling the waiting queue? Should the queue be
transferred when the owner changes?
And finally, how would this resolve the issue of a lock-owner node failing?
Who would be the next owner?


On Mon, Apr 2, 2018 at 10:54 AM, Green <15151803...@163.com> wrote:

> Hi,Roman
>   Thank you for the reply.
>   I think i should change the value of cacheMode to replicated, it is more
> safe.
>   why not cache the lock on the node who own the lock? If the node leaves
> topology,  it will has no effect on other nodes.
>
> Thanks
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>



-- 
Best regards,
Andrey V. Mashenkov


Re: Slow data load in ignite from S3

2018-04-02 Thread Andrey Mashenkov
Hi Rahul,

Possibly, mostly new data is being loaded into Ignite.
I mean that Ignite allocates new pages rather than updating existing ones.

In that case, you may not benefit from increasing the checkpoint buffer size;
it will just defer the checkpoint.

Also, you can try to move the WAL and the Ignite store to different disks, and
to set the data region's initial size to reduce or avoid region extent
allocation.
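
Both suggestions map onto DataStorageConfiguration (a sketch: the mount paths are made up, and the 20 GB region size mirrors the figure from the thread; adjust to your own disks):

```java
import org.apache.ignite.configuration.DataRegionConfiguration;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class StorageConfigExample {
    static IgniteConfiguration persistentConfig() {
        DataStorageConfiguration storage = new DataStorageConfiguration();

        // Put the WAL and the main store on different disks to spread I/O.
        storage.setWalPath("/mnt/wal-disk/wal");
        storage.setWalArchivePath("/mnt/wal-disk/wal-archive");
        storage.setStoragePath("/mnt/data-disk/db");

        DataRegionConfiguration region = storage.getDefaultDataRegionConfiguration();
        region.setPersistenceEnabled(true);
        // Pre-allocating the region (initial == max) avoids extent allocation at runtime.
        region.setInitialSize(20L * 1024 * 1024 * 1024);
        region.setMaxSize(20L * 1024 * 1024 * 1024);

        return new IgniteConfiguration().setDataStorageConfiguration(storage);
    }
}
```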

On Mon, Apr 2, 2018 at 9:59 AM, rahul aneja <rahulaneja...@gmail.com> wrote:

> Hi Andrey,
>
> Yes we are using SSD. Earlier we were using default checkpoint buffer 256
> MB , in order to reduce the frequency, we increased the buffer size , but
> it didn’t have any impact on performance
>
> On Fri, 30 Mar 2018 at 10:49 PM, Andrey Mashenkov <
> andrey.mashen...@gmail.com> wrote:
>
>> Hi,
>>
>> Possibly, storage is a bottleneck or checkpoint buffer is too large.
>> Do you use Provissioned IOPS SSD?
>>
>>
>> On Fri, Mar 30, 2018 at 3:32 PM, rahul aneja <rahulaneja...@gmail.com>
>> wrote:
>>
>>> Hi ,
>>>
>>> We are trying to load orc data (around 50 GB) on s3  from spark using
>>> dataframe API. It starts fast with good write throughput  and then after
>>> sometime throughput just drops and it gets stuck.
>>>
>>> We also tried changing multiple configurations , but no luck
>>> 1. enabling checkpoint write throttling
>>> 2. disabling throttling and increasing checkpoint buffer
>>>
>>>
>>> Please find below configuration and properties of the cluster
>>>
>>>
>>>1. 10 node cluster r4.4xl (EMR aws) and shared with spark
>>>2.  ignite is started with -Xms20g -Xmx30g
>>>3.  Cache mode is partitioned
>>>
>>>4. persistence is enabled
>>>5. DirectIO is enabled
>>>6. No backup
>>>
>>> <property name="dataStorageConfiguration">
>>>   <bean class="org.apache.ignite.configuration.DataStorageConfiguration">
>>>     <property name="defaultDataRegionConfiguration">
>>>       <bean class="org.apache.ignite.configuration.DataRegionConfiguration">
>>>         <property name="persistenceEnabled" value="true"/>
>>>         <property name="maxSize" value="#{20L * 1024 * 1024 * 1024}"/>
>>>         <property name="checkpointPageBufferSize" value="#{... * 1024 * 1024}"/>
>>>       </bean>
>>>     </property>
>>>   </bean>
>>> </property>
>>>
>>> Thanks in advance,
>>>
>>> Rahul Aneja
>>>
>>>
>>>
>>
>>
>> --
>> Best regards,
>> Andrey V. Mashenkov
>>
>


-- 
Best regards,
Andrey V. Mashenkov


Re: How to insert multiple rows/data into Cache once

2018-03-30 Thread Andrey Mashenkov
Hi,

Ignite has 2 JDBC drivers.
1. The client driver [1] starts a client node (with all failover features), and
you have to pass the client node config in the URL.
2. The thin driver [2] connects directly to one of the Ignite server nodes.

So, you need the first one to be able to use streaming mode.

[1] https://apacheignite-sql.readme.io/docs/jdbc-client-driver
[2] https://apacheignite-sql.readme.io/docs/jdbc-driver
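
As an illustration of what the config file referenced by such a client-driver
URL could look like, here is a minimal hypothetical ignite-jdbc.xml sketch.
The bean classes are the standard Ignite/Spring ones; the discovery addresses
are placeholders you would replace with your own:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Sketch of an ignite-jdbc.xml for the client JDBC driver. -->
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
                           http://www.springframework.org/schema/beans/spring-beans.xsd">
    <bean id="ignite.cfg" class="org.apache.ignite.configuration.IgniteConfiguration">
        <!-- The client JDBC driver starts a client node. -->
        <property name="clientMode" value="true"/>
        <property name="discoverySpi">
            <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
                <property name="ipFinder">
                    <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
                        <property name="addresses">
                            <list>
                                <!-- Placeholder: addresses of your server nodes. -->
                                <value>127.0.0.1:47500..47509</value>
                            </list>
                        </property>
                    </bean>
                </property>
            </bean>
        </property>
    </bean>
</beans>
```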

On Fri, Mar 30, 2018 at 1:16 PM, <linr...@itri.org.tw> wrote:

> Hi Andrey,
>
>
>
> I am trying to run [2], as:
>
> // Register JDBC driver.
>
> Class.forName("org.apache.ignite.IgniteJdbcDriver");
>
> // Opening connection in the streaming mode.
>
> Connection conn = DriverManager.getConnection("
> jdbc:ignite:cfg://streaming=true@file:///etc/config/ignite-jdbc.xml");
>
>
>
> However, I'm a bit confused about the setting in [2] regarding the
> ignite-jdbc.xml.
>
>
>
> I do not know how to find or create that XML file, and here I run the Ignite
> node via the JVM.
>
>
>
> Can I write Java code to produce the ignite-jdbc configuration, or is only a
> complete Spring XML configuration possible?
>
>
>
> By the way, I have tried [1]; that worked well.
>
>
>
> Finally, I still need to use SQL as a client node and quickly write
> data into the cache.
>
>
>
> Thank you for helping me
>
>
>
> Rick
>
>
>
>
>
> *From:* Andrey Mashenkov [mailto:andrey.mashen...@gmail.com]
> *Sent:* Thursday, March 29, 2018 6:20 PM
> *To:* user@ignite.apache.org
> *Subject:* Re: How to insert multiple rows/data into Cache once
>
>
>
> Hi,
>
>
>
> Try to use DataStreamer for fast cache load [1].
>
> If you need to use SQL, you can try to use streaming-mode updates via JDBC [2].
>
>
>
>
>
> Also, a COPY SQL command [3] will be available in the next 2.5 release.
>
> The feature is already in master; you can try to build from it. See
> example [4].
>
>
>
> [1] https://apacheignite.readme.io/docs/data-streamers
>
> [2] https://apacheignite.readme.io/v2.0/docs/jdbc-
> driver#section-streaming-mode
>
> [3] https://issues.apache.org/jira/browse/IGNITE-6917
>
> [4] https://github.com/apache/ignite/blob/master/examples/
> src/main/java/org/apache/ignite/examples/sql/SqlJdbcCopyExample.java
>
>
>
> On Thu, Mar 29, 2018 at 11:30 AM, <linr...@itri.org.tw> wrote:
>
> Dear all,
>
>
>
> I am trying to use the SqlFieldsQuery sdk to insert data to one cache on
> Ignite.
>
>
>
> I can insert one data into one cache at a time.
>
>
>
> However, I have no idea to insert multiple rows/data into the cache once.
>
>
>
> For example, I would like to insert 1000 rows/data into the cache once.
>
>
>
> Here, I provide my code to everyone to reproduce my situation.
>
> public class IgniteCreateServer {
>
>   public class Person {
>     @QuerySqlField
>     private String firstName;
>
>     @QuerySqlField
>     private String lastName;
>
>     public Person(String firstName, String lastName) {
>       this.firstName = firstName;
>       this.lastName = lastName;
>     }
>   }
>
>   public static void main(String[] args) {
>     cacheConf.setName("igniteCache");
>     cacheConf.setIndexedTypes(String.class, String.class);
>     cacheConf.setCacheMode(CacheMode.REPLICATED);
>     cacheConf.setAtomicityMode(CacheAtomicityMode.ATOMIC);
>     cfg.setCacheConfiguration(cacheConf);
>     Ignite igniteNode = Ignition.getOrStart(cfg);
>     IgniteCache cacheKeyvalue = igniteNode.getOrCreateCache(cacheConf);
>
>     long starttime, endtime;
>     starttime = System.currentTimeMillis();
>     int datasize = 10;
>     for (int i = 0; i < datasize; i++) {
>       cacheKeyvalue.put("key " + Integer.toString(i), Integer.toString(i));
>     }
>     endtime = System.currentTimeMillis();
>     System.out.println("write " + datasize + " pairkeyvalue data: spend "
>         + (endtime - starttime) + " milliseconds");
>
>     // =================================================================
>
>     cacheCfg.setName("personCache");
>     cacheCfg.setIndexedTypes(String.class, Person.class);
>     cacheCfg.setCacheMode(CacheMode.REPLICATED);
>     cacheCfg.setAtomicityMode(CacheAtomicityMode.ATOMIC);
>     IgniteCache cacheKeyTable = igniteNode.getOrCreateCache(cacheCfg);
>
>     long starttime1, endtime1;
>     starttime1 = System.currentTimeMillis();
>

Re: Ignite Client Heap out of Memory issue

2018-03-30 Thread Andrey Mashenkov
Hi Shawn,

1. Ignite uses off-heap memory to store cache entries. A client stores no cache
data. A cache in LOCAL mode can be used on the client side, and it uses
off-heap memory, of course.

All the data a client retrieves from a server will be in off-heap memory.

2. It is not an IgniteOutOfMemory error, but a JVM OOM.
So, try to investigate whether there is a memory leak in your code.
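
As a side note on capturing a heap dump next time (question 2), the worker
JVM's options could include flags like the sketch below. The dump path is a
hypothetical placeholder; adjust it to a disk with enough free space:

```text
# Dump the heap automatically when the JVM throws an OutOfMemoryError.
-XX:+HeapDumpOnOutOfMemoryError
-XX:HeapDumpPath=/var/log/storm/heapdumps
```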

On Fri, Mar 30, 2018 at 6:36 AM, shawn.du  wrote:

> Hi,
>
> My Ignite client's heap went OOM yesterday. This is the first time we have
> encountered this issue.
>
> My Ignite client is colocated within a Storm worker process; this issue caused
> the Storm worker to restart.
> I have several questions about it (our Ignite version is 2.3.0):
> 1) If Ignite is in client mode, does it use off-heap memory? How do we set the
> maximum on-heap/off-heap memory to use?
> 2) Our Storm worker has 8 GB of memory; the Ignite client printed an OOM, but
> it didn't trigger the Storm worker to dump the heap.
> However, we got an Ignite server's heap dump; the Ignite server didn't die.
> The Ignite server's heap dump is very small: only 200 MB.
> Which process is OOM, the worker or the Ignite server?
>
> These are the logs. Thanks in advance:
>
>   Suppressed: org.apache.ignite.IgniteCheckedException: Failed to update
> keys on primary node.
> at org.apache.ignite.internal.
> processors.cache.distributed.dht.atomic.UpdateErrors.
> addFailedKeys(UpdateErrors.java:124) ~[stormjar.jar:?]
> at org.apache.ignite.internal.
> processors.cache.distributed.dht.atomic.GridNearAtomicUpdateResponse.
> addFailedKeys(GridNearAtomicUpdateResponse.java:342) ~[stormjar.jar:?]
> at org.apache.ignite.internal.
> processors.cache.distributed.dht.atomic.GridDhtAtomicCache.
> updateAllAsyncInternal0(GridDhtAtomicCache.java:1784) ~[stormjar.jar:?]
> at org.apache.ignite.internal.
> processors.cache.distributed.dht.atomic.GridDhtAtomicCache.
> updateAllAsyncInternal(GridDhtAtomicCache.java:1627) ~[stormjar.jar:?]
> at org.apache.ignite.internal.
> processors.cache.distributed.dht.atomic.GridDhtAtomicCache.
> processNearAtomicUpdateRequest(GridDhtAtomicCache.java:3054)
> ~[stormjar.jar:?]
> at org.apache.ignite.internal.
> processors.cache.distributed.dht.atomic.GridDhtAtomicCache.
> access$400(GridDhtAtomicCache.java:129) ~[stormjar.jar:?]
> at org.apache.ignite.internal.
> processors.cache.distributed.dht.atomic.GridDhtAtomicCache$
> 5.apply(GridDhtAtomicCache.java:265) ~[stormjar.jar:?]
> at org.apache.ignite.internal.
> processors.cache.distributed.dht.atomic.GridDhtAtomicCache$
> 5.apply(GridDhtAtomicCache.java:260) ~[stormjar.jar:?]
> at org.apache.ignite.internal.processors.cache.
> GridCacheIoManager.processMessage(GridCacheIoManager.java:1060)
> ~[stormjar.jar:?]
> at org.apache.ignite.internal.processors.cache.
> GridCacheIoManager.onMessage0(GridCacheIoManager.java:579)
> ~[stormjar.jar:?]
> at org.apache.ignite.internal.processors.cache.
> GridCacheIoManager.handleMessage(GridCacheIoManager.java:378)
> ~[stormjar.jar:?]
> at org.apache.ignite.internal.processors.cache.
> GridCacheIoManager.handleMessage(GridCacheIoManager.java:304)
> ~[stormjar.jar:?]
> at org.apache.ignite.internal.processors.cache.
> GridCacheIoManager.access$100(GridCacheIoManager.java:99)
> ~[stormjar.jar:?]
> at org.apache.ignite.internal.processors.cache.
> GridCacheIoManager$1.onMessage(GridCacheIoManager.java:293)
> ~[stormjar.jar:?]
> at org.apache.ignite.internal.managers.communication.
> GridIoManager.invokeListener(GridIoManager.java:1555) ~[stormjar.jar:?]
> at org.apache.ignite.internal.managers.communication.
> GridIoManager.processRegularMessage0(GridIoManager.java:1183)
> ~[stormjar.jar:?]
> at org.apache.ignite.internal.managers.communication.
> GridIoManager.access$4200(GridIoManager.java:126) ~[stormjar.jar:?]
> at org.apache.ignite.internal.managers.communication.
> GridIoManager$9.run(GridIoManager.java:1090) ~[stormjar.jar:?]
> at 
> org.apache.ignite.internal.util.StripedExecutor$Stripe.run(StripedExecutor.java:505)
> ~[stormjar.jar:?]
> at java.lang.Thread.run(Thread.java:748) [?:1.8.0_151]
> Suppressed: java.lang.OutOfMemoryError: Java heap space
> at org.apache.ignite.internal.processors.cache.
> IncompleteCacheObject.(IncompleteCacheObject.java:44)
> ~[stormjar.jar:?]
> at org.apache.ignite.internal.
> processors.cacheobject.IgniteCacheObjectProcessorImpl.toCacheObject(
> IgniteCacheObjectProcessorImpl.java:191) ~[stormjar.jar:?]
> at org.apache.ignite.internal.
> processors.cache.persistence.CacheDataRowAdapter.readIncompleteValue(CacheDataRowAdapter.java:404)
> ~[stormjar.jar:?]
> at org.apache.ignite.internal.
> 
