Webinar >> How to Use Spark With Apache Ignite for Big Data Processing

2021-04-26 Thread aealexsandrov
Hi Igniters!

I'm going to talk about Apache Ignite Spark integration and run live demo
examples during the following webinar:

https://www.gridgain.com/resources/webinars/how-use-spark-apache-ignite-big-data-processing

If you'd like to learn more about the existing API, take a look at some use
cases, or just ask a few questions, feel free to join the webinar.

BR,
Andrei



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: IgniteCache.size() is hanging

2020-09-24 Thread aealexsandrov
Hi,

Can you please provide the full server logs?

BR,
Andrei





Re: Remote Cluster ServiceGrid debug

2020-09-23 Thread aealexsandrov
Hi,

There is too little information about your case. What kind of debugging are you
going to do?

Ignite has several monitoring tools. For example, you can use the Web
Console:

https://apacheignite-tools.readme.io/v2.8/docs/ignite-web-console

It may cover your needs.

BR,
Andrei







Re: Enabling swapPath causes invoking shutdown hook

2020-07-27 Thread aealexsandrov
Hi,

The documentation doesn't look very clear to me either. You probably need to
prepare your operating system to use swap space. Can you try creating the swap
file first, as described here:

https://linuxize.com/post/how-to-add-swap-space-on-ubuntu-18-04/

And then reference that file in your configuration.
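For reference, a data region in the Ignite XML configuration can point at the prepared location via the swapPath property of DataRegionConfiguration. This is only a sketch; the region name, size, and path below are placeholder values:

```xml
<!-- Sketch only: swapPath on DataRegionConfiguration; name/size/path are placeholders. -->
<bean class="org.apache.ignite.configuration.DataRegionConfiguration">
    <property name="name" value="swapRegion"/>
    <property name="maxSize" value="#{1L * 1024 * 1024 * 1024}"/> <!-- 1 GB -->
    <property name="swapPath" value="/path/to/swap/dir"/>
</bean>
```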

BR,
Andrei





Re: Fwd: Exceptions in C++ Ignite Thin Client on process exit

2020-07-27 Thread aealexsandrov
Hi,

Are you sure you don't have connectivity problems there? Could you share the
full logs?

BR,
Andrei





Re: Node left from running cluster due to org.apache.ignite.internal.processors.cache.persistence.tree.CorruptedTreeException: Runtime failure on search row

2020-07-27 Thread aealexsandrov
Hi,

Can you please clarify which version you upgraded from?

If you upgraded from a fairly old version, you may also need to rebuild your
indexes during the upgrade, because the inline size calculation logic changed
in recent releases.

This can be done by removing the index.bin files. The procedure is as
follows:

1)Stop the Ignite nodes.
2)Remove the index.bin files from
$IGNITE_HOME/db///index.bin
3)Start the Ignite nodes one by one. For each node, wait for the following
message:

[2020-05-29 07:55:04,984][INFO
][build-idx-runner-#61][GridCacheDatabaseSharedManager] Indexes rebuilding
completed for all caches.
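The deletion in step 2 can be sketched in plain Java. This is only an illustration run against a temporary directory that mimics the db layout; the class name and directory names are made up, it is not an official Ignite utility, and the nodes must be stopped before touching the real files:

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

// Sketch of step 2: find and delete every index.bin under the node's db directory.
public class DropIndexFiles {
    static long dropIndexFiles(Path dbDir) {
        try (Stream<Path> s = Files.walk(dbDir)) {
            List<Path> idx = s.filter(p -> p.getFileName().toString().equals("index.bin"))
                              .collect(Collectors.toList());
            for (Path p : idx)
                Files.delete(p);             // the node must be stopped at this point
            return idx.size();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    // Demo: build a fake layout with two caches, each holding one index.bin,
    // and drop the index files while leaving partition files untouched.
    static long createAndDrop() {
        try {
            Path db = Files.createTempDirectory("db");
            for (String cache : new String[] {"cache-person", "cache-order"}) {
                Path dir = Files.createDirectories(db.resolve("node1").resolve(cache));
                Files.createFile(dir.resolve("index.bin"));
                Files.createFile(dir.resolve("part-0.bin")); // stays untouched
            }
            return dropIndexFiles(db);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(createAndDrop()); // prints 2
    }
}
```

Against a real installation the walk would start from the node's db directory instead of the temporary one.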

I expect this can help with your upgrade. Please test it before applying it to
production, because the root cause may be different.

BR,
Andrei





Re: Blocked system-critical thread has been detected

2020-07-27 Thread aealexsandrov
Hi,

Your log doesn't contain the full thread dumps, and some information is
missing (e.g. topology snapshots). However, I can see that the checkpoint
thread was blocked for a long time:

[02:45:50,849][SEVERE][tcp-disco-msg-worker-[3dac150e
10.20.4.18:47500]-#2][G] Blocked system-critical thread has been detected.
This can lead to cluster-wide undefined behaviour
[workerName=db-checkpoint-thread, threadName=db-checkpoint-thread-#54,
blockedFor=172s]

That said, it was blocked for no longer than about 3 minutes.

My guess is that the checkpoint lock can't be acquired until some other
operation times out; it could be a network-related timeout or an operation
timeout.

So please check your configuration, find where you have a 3-minute timeout,
and see what that timeout relates to.

BR,
Andrei





Re: Enabling swapPath causes invoking shutdown hook

2020-07-24 Thread aealexsandrov
Hi,

Can you please clarify your expectations? Did you expect the JVM process to be
killed instead of stopping gracefully? What are you trying to achieve?

BR,
Andrei





Re: A question regarding cluster groups

2020-07-24 Thread aealexsandrov
Hi,

1)No, at the moment there is no such implementation out of the box. You would
have to implement your own data-moving tool.

2)There is no such implementation at the moment either. However, there is a
discussion about it on the developer mailing list:

http://apache-ignite-developers.2346864.n4.nabble.com/DISCUSSION-Hot-cache-backup-td41034.html

BR,
Andrei





Re: Too much network latency issue

2020-07-24 Thread aealexsandrov
Hi,

I guess some of your operations were slow because of how load balancing works.
There are two ways the target node is chosen:

1)If readFromBackup = false, the node holding the primary partition is chosen.
Primaries can reside on different nodes.
2)If readFromBackup = true, a random owning node is chosen by the load
balancer.

Ignite doesn't have data center awareness functionality.

As a workaround, you can use compute tasks that are executed on the required
node. Such a task can use localPeek for cache operations:

https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/IgniteCache.html#localPeek-K-org.apache.ignite.cache.CachePeekMode...-

For SQL you can use local queries:

https://apacheignite-sql.readme.io/docs/local-queries

BR,
Andrei







Re: read-though tutorial for a big table

2020-03-10 Thread aealexsandrov
Hi,

You can read the documentation articles:

https://apacheignite.readme.io/docs/3rd-party-store

If you are going to load the cache from a 3rd-party store (RDBMS), the default
CacheJdbcPojoStore implementation can take a lot of time to load the data,
because internally it uses a single JDBC connection (not a pool of
connections).

You should probably implement your own CacheStore that reads data from the
RDBMS in several threads, e.g. using a JDBC connection pool. The sources are
open, so you can copy the existing implementation and modify it:

https://github.com/apache/ignite/blob/master/modules/core/src/main/java/org/apache/ignite/cache/store/jdbc/CacheJdbcPojoStore.java
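The multi-threaded idea can be sketched with plain JDK primitives. The class below is a hypothetical stand-in: in a real CacheStore each worker would run a ranged SELECT on its own pooled JDBC connection and hand entries to Ignite's load closure, but here it just fills a map so the range-splitting and joining logic is visible:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Sketch: split the key space into ranges and load each range on its own thread.
public class ParallelLoad {
    public static Map<Integer, String> loadRanges(int totalKeys, int threads) {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        ConcurrentMap<Integer, String> target = new ConcurrentHashMap<>();
        int chunk = (totalKeys + threads - 1) / threads;

        List<Future<?>> futs = new ArrayList<>();
        for (int t = 0; t < threads; t++) {
            int from = t * chunk, to = Math.min(totalKeys, from + chunk);
            futs.add(pool.submit(() -> {
                for (int k = from; k < to; k++)
                    target.put(k, "row-" + k);   // stand-in for a ranged JDBC fetch
            }));
        }

        try {
            for (Future<?> f : futs)
                f.get();                         // wait for every range loader
        } catch (InterruptedException | ExecutionException e) {
            throw new RuntimeException(e);
        }

        pool.shutdown();
        return target;
    }
}
```

The same splitting strategy (ranges over a numeric key or over ROWNUM windows) is what a custom loadCache implementation would parallelize.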

Alternatively, you can do the initial data loading using streaming tools:

1)Spark integration with Ignite -
https://apacheignite-fs.readme.io/docs/ignite-data-frame
2)Kafka integration with Ignite -
https://apacheignite-mix.readme.io/docs/kafka-streamer

BR,
Andrei





Re: SQL Error [08006]: Failed to communicate with Ignite cluster <-- BinaryObjectException: Not enough data to read the value

2020-03-10 Thread aealexsandrov
Hi,

Can you please test Apache Ignite 2.8? It contains the following
fix:

https://issues.apache.org/jira/browse/IGNITE-11953

BR,
Andrei





Re: How to access IGFS file written one node from other node in cluster ??

2020-02-25 Thread aealexsandrov
As was mentioned above, IGFS is deprecated and not recommended for use.

Just use native HDFS API for working with files:

https://hadoop.apache.org/docs/stable/api/org/apache/hadoop/fs/FileSystem.html

If you run SQL over HDFS (Hive, Impala), you can set up an Ignite cache
store:

https://apacheignite.readme.io/docs/3rd-party-store





Re: Real time sync ignite cache

2019-06-04 Thread aealexsandrov
Hi,

Without cache stores, you can try the ignite-spark integration, but it will
not provide real-time synchronization.

Also, you can try to set up incremental data loading over JDBC, for example
via Apache Sqoop.

BR,
Andrei





Re: Python client: create cache with expiration policy

2019-06-03 Thread aealexsandrov
Yes, my mistake. I double-checked the list of existing options, and it looks
like there is no expiry policy option there.

I see that a PROP_EAGER_TTL property exists:

https://apacheignite.readme.io/docs/expiry-policies#section-eager-ttl

But there are no options for expiration policies.

Give me some time to check the source code.

BR,
Andrei





Re: AdaptiveLoadBalancingSpi not removing finished tasks from taskTops map

2019-06-03 Thread aealexsandrov
Hi,

It looks like an issue that could be the cause of an OOM exception.

Could you share the list of top memory-consuming classes from the heap dump
(this can be done with a memory analyzer, e.g. Eclipse MAT)?

Also, could you provide the code and configuration used in your load testing?

BR,
Andrei





Re: Cache expiry policy is slow

2019-06-03 Thread aealexsandrov
Hi,

Could you please share your reproducer showing how you verify the expiration
of the entries?

BR,
Andrei





Re: Regarding IGNITE_HOME/work/db folder - Snapshot and recovery

2019-06-03 Thread aealexsandrov
Hi,

The error means that some of your data could be in an inconsistent state
because, for example, the node was stopped during the checkpoint process.

However, the WAL and WAL archive can be required for cluster startup in some
cases because they contain recovery information for cache data. You can
read about it here:

https://apacheignite.readme.io/docs/write-ahead-log

I guess you moved your data without moving the WAL and/or WAL archive. Note
that if you don't set the WAL and WAL archive paths explicitly, Ignite will
look for them under the IGNITE_HOME location.

As I see Ivan asked you to copy:

1){$IGNITE_HOME}\work\db
2){$IGNITE_HOME}\work\db\wal

But you said that you copied only work\db. It looks like the data can't be
applied without work\db\wal.

However, I suggest backing up the whole working directory, which
contains:

1){$IGNITE_HOME}\work\db
2){$IGNITE_HOME}\work\db\wal
3){$IGNITE_HOME}\work\binary_meta
4){$IGNITE_HOME}\work\marshaller

The last two contain information about your classes.

BR,
Andrei





Re: Query on bean injection

2019-06-03 Thread aealexsandrov
Hi,

It looks like a related issue has already been discussed here:

https://stackoverflow.com/questions/45269537/how-to-inject-spring-resources-into-apache-ignite-classes

The main idea is to use:

https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/IgniteSpringBean.html

BR,
Andrei





Re: Python client: create cache with expiration policy

2019-06-03 Thread aealexsandrov
Hi,

Yes, it should be possible. Please see the documentation shipped with the binaries:

https://github.com/apache/ignite/blob/master/modules/platforms/python/docs/datatypes/cache_props.rst

PROP_PARTITION_LOSS_POLICY:

Partition loss policy: READ_ONLY_SAFE=0, READ_ONLY_ALL=1, READ_WRITE_SAFE=2,
READ_WRITE_ALL=3, IGNORE=4

BR,
Andrei





Re: Real time sync ignite cache

2019-05-31 Thread aealexsandrov
Hi,

What database do you use?

BR,
Andrei





Re: How to use transaction.commitAsync()?

2019-05-28 Thread aealexsandrov
Hi,

Yes, it should be fine.

https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/lang/IgniteFuture.html#listen-org.apache.ignite.lang.IgniteInClosure-
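As a plain-JDK illustration of this listen-instead-of-block pattern, here is a sketch using CompletableFuture as an analog of IgniteFuture. The class and method names are made up, and with Ignite the equivalent shape would be tx.commitAsync().listen(fut -> ...):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.atomic.AtomicReference;

// Analog of commitAsync().listen(...): register a callback that fires when the
// commit future completes, instead of blocking on get().
public class AsyncCommitSketch {
    public static String commitAndListen() {
        AtomicReference<String> outcome = new AtomicReference<>("pending");

        CompletableFuture<Void> commit =
            CompletableFuture.runAsync(() -> { /* transactional work happens here */ });

        // This callback plays the role of the IgniteInClosure passed to listen().
        CompletableFuture<Void> done = commit.whenComplete(
            (v, err) -> outcome.set(err == null ? "committed" : "rolled back"));

        done.join(); // demo only: wait so the callback is guaranteed to have run
        return outcome.get();
    }

    public static void main(String[] args) {
        System.out.println(commitAndListen()); // prints "committed"
    }
}
```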

BR,
Andrei





Re: Efficiency of key queries

2019-05-28 Thread aealexsandrov
Hi,

In the case of a single node, I don't think you can speed this process up. But
you can scale performance by adding new nodes.

This can be done using compute tasks with local SQL queries or local cache
operations. It means you will be able to run part of your logic on every
data node and send only the results to the client.

You can read more about compute tasks here:

https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/compute/ComputeTask.html

You can see an example here:

https://github.com/apache/ignite/blob/master/examples/src/main/java/org/apache/ignite/examples/computegrid/ComputeTaskMapExample.java

A local SqlFieldsQuery can be executed inside a compute task using the following flag:

https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/cache/query/SqlFieldsQuery.html#setLocal-boolean-

Local cache operations can be performed via the following method:

https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/IgniteCache.html#localPeek-K-org.apache.ignite.cache.CachePeekMode...-

BR,
Andrei









Re: Trouble with continuous queries

2019-05-02 Thread aealexsandrov
Hi,

A good example of how this can be done is available here:

https://github.com/gridgain/gridgain-advanced-examples/blob/master/src/main/java/org/gridgain/examples/datagrid/query/ContinuousQueryExample.java

You can set a remote filter to handle changes on remote nodes and a local
listener for the current one.

Note that you will receive updates only until the ContinuousQuery is
closed or the node that started it leaves the cluster.

Also, you can try to use cache events, as in the example here:

https://apacheignite.readme.io/docs/events#section-remote-events

Note that events can affect your performance.

BR,
Andrei







Re: Cache#replace(K,V,V) implementation

2019-04-12 Thread aealexsandrov
Hi,

It's true that Ignite provides an implementation of JSR-107 (JCache):

https://ignite.apache.org/use-cases/caching/jcache-provider.html
https://apacheignite.readme.io/docs/jcache

Unfortunately, I don't know the details of how exactly it is implemented and
can't advise on your questions, but Apache Ignite is open source, so you can
easily check the code:

https://github.com/apache/ignite/blob/master/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/GridCacheAdapter.java
(the replace method)

Also, you can try to find answers on the developer mailing list:

http://apache-ignite-developers.2346864.n4.nabble.com

BR,
Andrei





Re: waiting for partition map exchange questions

2019-04-12 Thread aealexsandrov
Hi,

Yes, without logs it's not easy to understand the reason. Could you
please attach them?

BR,
Andrei





Re: Trying to understand the capacity calculator

2019-04-12 Thread aealexsandrov
Hi,

This calculator provides approximate results, but I will try to explain:

1)Does it mean that I need to give or take 3GB of physical RAM on the host
and set OFF-HEAP value to 3GB?

Not entirely correct. Yes, you should have 3 GB of RAM for off-heap. But you
should also provide some memory for the Ignite application heap. The minimum
requirement for a single node is 512 MB, which is enough to start and run.
The recommended configuration is described here:

https://apacheignite.readme.io/docs/jvm-and-system-tuning

Also, your operating system and other applications will use memory. You
should take this into account.

2)And give or take 5GB of physical disk space and set the data region 5GB?

Correct, provided that you limit the WAL history. The WAL history can be
unlimited in some configurations, in which case its size can't be
calculated.

You can set the limit:

https://apacheignite.readme.io/docs/write-ahead-log#section-wal-archive

3)Now lets say I get 10 million and 1 objects would I get OOM?

With persistence, no. Some entries will be replaced in memory by new ones,
but all of them will be stored on disk. However, you can get a "no space left
on device" error if you run out of disk space.

With persistence disabled, you may see an OOM if you didn't set eviction
policies:

https://apacheignite.readme.io/docs/evictions
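For reference, the off-heap limit from point 1 maps to the maxSize property of DataRegionConfiguration. A minimal sketch of a 3 GB persistent default region (bean layout abbreviated; the values are the ones discussed above):

```xml
<!-- Sketch only: a 3 GB persistent default data region. -->
<property name="dataStorageConfiguration">
    <bean class="org.apache.ignite.configuration.DataStorageConfiguration">
        <property name="defaultDataRegionConfiguration">
            <bean class="org.apache.ignite.configuration.DataRegionConfiguration">
                <property name="maxSize" value="#{3L * 1024 * 1024 * 1024}"/> <!-- 3 GB off-heap -->
                <property name="persistenceEnabled" value="true"/>
            </bean>
        </property>
    </bean>
</property>
```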

BR,
Andrei





Re: How to use UPDATE statement and set value to NULL

2019-04-12 Thread aealexsandrov
Hi,

Here is a small example that works for me:

IgniteCache<Integer, Person> cache = grid(1).cache("person");

cache.put(1, new Person("sun", 100));
cache.put(2, new Person("moon", 50));

String sql =
    "UPDATE person" +
    "   SET name = ?" +
    " WHERE age > 10;";

Object nullPtr = null; // important: a bare null would be parsed as
                       // args[] = null instead of args[0] = null

cache.query(new SqlFieldsQuery(sql).setArgs(nullPtr));

SqlFieldsQuery qry =
    new SqlFieldsQuery("select age from person where name is null;");

FieldsQueryCursor<List<?>> qryCursor = cache.query(qry);

Iterator<List<?>> iterator = qryCursor.iterator();

while (iterator.hasNext()) {
    List<?> row = iterator.next();

    System.out.println(">>> Age " + row.get(0));
}

Result:

>>> Age 100
>>> Age 50

BR,
Andrei





Re: How to automatic reset state of lost partitions?

2019-04-10 Thread aealexsandrov
Hi,

Yes, that's the correct solution.

Note that resetLostPartitions just returns the partitions to a normal state,
so it makes sense to call it once all baseline nodes are back online.

If you have 3 nodes with 1 backup for all caches, and two nodes go down but
only one comes back, then it's better to set a new baseline consisting of
those 2 nodes. The data should be rebalanced in this case.

BR,
Andrei





Re: any known issue with oracle and spring-ignite-data

2019-04-05 Thread aealexsandrov
Hi,

Can you attach your cache configuration for your table that contains the
itemId and storeId fields?

BR,
Andrei





Re: Best way to upsetting/delete records in Ignite from Kafka Stream?

2019-03-19 Thread aealexsandrov
Hi,

Yes, it looks like the KafkaStreamer doesn't support DELETE behavior. It
was created for loading data into Ignite.

However, deleting data from Ignite is possible via the
IgniteDataStreamer API:

https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/IgniteDataStreamer.html#removeData-K-

So you just need to process the Kafka stream and call IgniteDataStreamer
(or the IgniteCache API) in a sink function. You may also want to
take a look at Kafka Connect:

https://kafka.apache.org/documentation.html#quickstart_kafkaconnect

BR,
Andrei





Re: How to use enums in sql ‘where’ clause when using Ignite Web Console

2019-03-15 Thread aealexsandrov
Hi,

Could you please share your CacheConfiguration where you set the EnumField
as a field in QueryEntity (just to have the same configuration)? I will try
to reproduce your issue and check how it works.

BR,
Andrei





Re: Exception while trying to insert data in Key/value map in ignite

2019-03-15 Thread aealexsandrov
Hi,

Could you please provide more details:

1)Server configuration
2)Cache configuration
3)Logs
4)The example of the data that you are going to insert

BR,
Andrei





Re: The field name of the query result becomes uppercase

2019-03-15 Thread aealexsandrov
Hi,

No, that's not possible, because all fields in Ignite SQL must be mapped to
Java fields. However, in Java you can do the following:

@QuerySqlField
private java.lang.String id;
@QuerySqlField
private java.lang.String Id;

Here id and Id are different fields. But the H2 SQL engine treats field names
as case-insensitive, so one SQL field could be mapped to two Java fields.
You can see the details in this ticket:
https://issues.apache.org/jira/browse/IGNITE-1338 (it was closed as "won't fix")

BR,
Andrei





Re: LIKE operator on Array column in Apache ignite SQL

2019-03-15 Thread aealexsandrov
Hi,

The Ignite SQL engine supports only the following types:

https://apacheignite-sql.readme.io/docs/data-types

All available SQL functions are listed here:

https://apacheignite-sql.readme.io/docs/sql-reference-overview

So there is no way to work with arrays as SQL data types, even if you set an
array as a field type in QueryEntity. One option is to add a boolean SQL
field to the object that contains the array:

public class Market {
    @QuerySqlField(index = true)
    private java.lang.String id;
    @QuerySqlField
    private java.lang.Boolean isContainLacl;
    @QuerySqlField
    private ArrayList<String> array;
}

select * from market where isContainLacl = true;

You can fill this boolean flag when generating the Market object.

BR,
Andrei





Re: DataRegionConfiguration is a FINAL class but prefer it not be

2019-03-15 Thread aealexsandrov
Hi,

I think it was done to avoid possible issues that can happen because users
generally can't see the full logic under the hood.

If you want to suggest an improvement, it's better to start a thread on the
Apache Ignite development mailing list:

http://apache-ignite-developers.2346864.n4.nabble.com/

BR,
Andrei





Re: Best way to upsetting/delete records in Ignite from Kafka Stream?

2019-03-15 Thread aealexsandrov
Hi,

You can use the Kafka streamer for these purposes:

https://apacheignite-mix.readme.io/docs/kafka-streamer

Also, take a look at this thread. It contains examples of how to work with
JSON files:

http://apache-ignite-users.70518.x6.nabble.com/Kindly-tell-me-where-to-find-these-jar-files-td12649.html

Regarding the UPSERT specifics: you should set the allowOverwrite flag on the
IgniteDataStreamer used by the KafkaStreamer (otherwise the behavior is
INSERT-only). You can do this using the following methods:

https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/stream/StreamAdapter.html#getStreamer--
https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/stream/StreamAdapter.html#setStreamer-org.apache.ignite.IgniteDataStreamer-

BR,
Andrei





Re: Eviction policy is not working for default and new data region

2019-01-23 Thread aealexsandrov
Hi,

Could you please attach the full XML configuration and the Java code that
reproduce this issue? You can upload your reproducer to GitHub, for example.

BR,
Andrei





Re: Ignite Inter node Communication security

2019-01-23 Thread aealexsandrov
Hi,

Out of the box, Ignite supports only simple password authentication. It
doesn't provide any fine-grained access control.

You can read more about it here:

https://apacheignite.readme.io/docs/advanced-security

If you require authorization options, you can implement the
GridSecurityProcessor interface as part of a custom plugin, or use one of the
existing plugins.

BR,
Andrei





Re: javax.cache.CacheException: class org.apache.ignite.IgniteCheckedException: Some of DataStreamer operations failed [failedCount=1]

2018-12-29 Thread aealexsandrov
Hi,

Could you please attach the full log file with:

Caused by: javax.cache.CacheException: class 
org.apache.ignite.IgniteCheckedException: Some of DataStreamer operations 
failed [failedCount=1] 
at 
org.apache.ignite.internal.processors.cache.GridCacheUtils.convertToCacheException(GridCacheUtils.java:1337)
 
at 
org.apache.ignite.internal.processors.datastreamer.DataStreamerImpl.close(DataStreamerImpl.java:1287)
 
at 
org.apache.ignite.internal.processors.datastreamer.DataStreamerImpl.close(DataStreamerImpl.java:1388)
 
at com.XXX.datagrid.DataGridClient.writeAll(DataGridClient.java:190) 

BR,
Andrei





Re: error in running shared rdd example with intellij

2018-12-29 Thread aealexsandrov
Hi,

It's possible that you changed something in the source code. I've attached a
simple Maven project with SharedRDDExample:

SharedRddExample.zip

Could you please try to run it?

BR,
Andrei





Re: Failed to initialize wal (work directory contains incorrect number of segments) [cur=357, expected=10]

2018-12-28 Thread aealexsandrov
Hi,

I was able to reproduce this issue. You can see the details in the following
ticket:

https://issues.apache.org/jira/browse/IGNITE-10840

BR,
Andrei





Re: Failed to initialize wal (work directory contains incorrect number of segments) [cur=357, expected=10]

2018-12-27 Thread aealexsandrov
Thank you,

I will try to reproduce the same.

Andrei





Re: Failed to initialize wal (work directory contains incorrect number of segments) [cur=357, expected=10]

2018-12-27 Thread aealexsandrov
No,

I'm just trying to get the details and reproduce the issue. There may be a
bug here; I will return with an update soon.

Also, could you please provide the configuration of your nodes? Did you
update the walArchivePath on all of them?

BR,
Andrei





Re: Failed to initialize wal (work directory contains incorrect number of segments) [cur=357, expected=10]

2018-12-27 Thread aealexsandrov
Hi,

Which version of Ignite do you use?

Did you set walPath to the directory that previously held the walArchivePath?

BR,
Andrei





Re: If I want to know how long it takes for data reading from the server side, where should I add logs?

2018-12-27 Thread aealexsandrov
Hi,

It would be better if you shared your code. However, I see that you use
persistence, so here are some reasons why you might see these drops:

1)Your data region is smaller than the full data size stored on disk. If a
read operation can't find a value in RAM, it reads it from disk. With
RANDOM_2_LRU this means the value is loaded into RAM and some other value is
evicted from memory.

So when you later try to read the evicted value, you may see a performance drop.

2)Another cause could be the initial loading of values from disk into
memory. Before starting the benchmark, you should warm Ignite up by reading
the stored values for some time.

Could you please check whether either of these applies to you?

BR,
Andrei





Re: Write Synchronization Mode per client

2018-12-26 Thread aealexsandrov
Hi,

A cache in Ignite is cluster-wide storage for data. The data is available to
all clients, and the cache should work the same way for each of them.
That's why the configuration can't be changed at runtime on an already
started cache.

I doubt all your clients will use the same data while having different
consistency requirements for it. So in your case I suggest creating separate
caches for each goal/user with the required configurations (each cache can
have a different synchronization mode).

BR,
Andrei





Re: If I want to know how long it takes for data reading from the server side, where should I add logs?

2018-12-26 Thread aealexsandrov
Hi,

Can you share your benchmark and explain what results you expect? Are
you sure there are no GC pauses at that moment?

Ignite already has its own benchmarks based on Yardstick:

https://apacheignite.readme.io/docs/perfomance-benchmarking

Do you use them? If not, it may be worth taking a look.

BR,
Andrei





Re: Write Synchronization Mode per client

2018-12-26 Thread aealexsandrov
Hi,

The synchronization mode is part of the cache configuration. It is set when
the cache starts; after that, the cache configuration can't be changed
at runtime.

The only way to change a cache's configuration is to destroy the
old cache and create a new one with the new configuration.

Could you please provide more details on how exactly your clients are
affected by the cache synchronization mode?

BR,
Andrei





Re: Use of GridFutureAdapter$Node objects ?

2018-12-26 Thread aealexsandrov
Hi,

It looks like this is part of the current implementation. I will try to
investigate it.

However, as a workaround you can use the following code to reduce memory
allocation when loading a lot of data:

int chunk_size = 1000;

Collection<IgniteFuture<?>> futures = new ArrayList<>();

try (IgniteDataStreamer<Integer, Integer> streamer =
         ignite.dataStreamer("Cache1")) {
    streamer.allowOverwrite(true);

    Map<Integer, Integer> entryMap = new HashMap<>();

    // your logic where you fill the map, for example:
    for (int i = 0; i < chunk_size; i++)
        entryMap.put(i, i);

    futures.add(streamer.addData(entryMap));
}

BR,
Andrei





Re: diferrences between IgniteRdd and SparkRdd

2018-12-26 Thread aealexsandrov
Hi,

The main difference between native Spark RDD and IgniteRDD is that Ignite
RDD provides a shared in-memory view on data across different Spark jobs,
workers, or applications, while native Spark RDD cannot be seen by other
Spark jobs or applications.

You can see the attached image that describes it.

spark-ignite-rdd.png

  

Regarding your second question, the provided code does the following:

1)Creates a Spark context.
2)Creates an Ignite context with the provided configuration.
3)Creates an Ignite shared RDD of type (Int, Int).
4)Fills the Ignite shared RDD with Int pairs.

To see the full example, please take a look at ScalarSharedRDDExample.scala
in the source files:

https://github.com/apache/ignite/blob/master/examples/src/main/scala/org/apache/ignite/scalar/examples/spark/ScalarSharedRDDExample.scala

BR,
Andrei






Re: error in importing ignite 2.7 to netbeans

2018-12-26 Thread aealexsandrov
Please check that you activated the scala profile, because without it the
Spark dependencies will not be added:

...
<profile>
    <id>scala</id>
    <build>
        <sourceDirectory>src/main/spark</sourceDirectory>
    </build>
    <dependencies>
        <dependency>
            <groupId>org.apache.ignite</groupId>
            <artifactId>ignite-scalar</artifactId>
            <version>2.7.0</version>
        </dependency>
        <dependency>
            <groupId>org.apache.ignite</groupId>
            <artifactId>ignite-spark</artifactId>
            <version>2.7.0</version>
        </dependency>
    </dependencies>
</profile>
...

BR,
Andrei





Re: error in importing ignite 2.7 to netbeans

2018-12-26 Thread aealexsandrov
Hi,

It depends on how exactly you start the SharedRDDExample.

1)If you use a new Maven project, then possibly you forgot to add the
Ignite Spark dependencies:


<dependency>
    <groupId>org.apache.ignite</groupId>
    <artifactId>ignite-core</artifactId>
    <version>2.7.0</version>
</dependency>

<dependency>
    <groupId>org.apache.ignite</groupId>
    <artifactId>ignite-spring</artifactId>
    <version>2.7.0</version>
</dependency>

<dependency>
    <groupId>org.apache.ignite</groupId>
    <artifactId>ignite-indexing</artifactId>
    <version>2.7.0</version>
</dependency>

<dependency>
    <groupId>org.apache.ignite</groupId>
    <artifactId>ignite-spark</artifactId>
    <version>2.7.0</version>
</dependency>


2)If you build from source, check that you set IGNITE_HOME.

If everything looks fine, please provide the full build error log.

BR,
Andrei





Re: Is it safe to decrease the data region size for a live cluster

2018-12-24 Thread aealexsandrov
Hi,

Yes, it's safe in case if rest (256-132) GB of RAM is enough for your system
working and other applications.

You have persistence, so your data will not be missed. Data region size sets
for every node and can be changed during node restart.

However, you should take an account to the next things:

1)If you need high availability of your data, you could face data loss while
one of the nodes is offline. To avoid it, add at least 1 backup for your
caches or use replicated caches.

2)You should accurately calculate your memory usage to avoid OOM issues; this
is mostly important when you are going to increase the data region size.

The best practice here is to set the data region size to a value that makes
it possible to store all persisted data in RAM. In this case, you will see
the best performance. When the data region is small, you will read from disk
more often and your performance will be lower.
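
For reference, the per-node region size is set in the node configuration and
is applied on node restart. A minimal sketch of a DataStorageConfiguration
fragment (the 132 GB value here just mirrors the numbers above and is an
assumption about your target size):

```xml
<property name="dataStorageConfiguration">
    <bean class="org.apache.ignite.configuration.DataStorageConfiguration">
        <property name="defaultDataRegionConfiguration">
            <bean class="org.apache.ignite.configuration.DataRegionConfiguration">
                <!-- Hypothetical target: decrease the region to 132 GB. -->
                <property name="maxSize" value="#{132L * 1024 * 1024 * 1024}"/>
            </bean>
        </property>
    </bean>
</property>
```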

BR,
Andrei





Re: Question about cpu usage and disk read speed on activation stage.

2018-12-20 Thread aealexsandrov
Hi,

I was wrong: you can activate the cluster with an incomplete baseline
topology.

It looks like your node was stopped during a checkpoint, which is the reason
the recovery of the binary memory takes so long.

Here is a ticket that should optimize this process:

https://issues.apache.org/jira/browse/IGNITE-10749

BR,
Andrei





Re: Question about cpu usage and disk read speed on activation stage.

2018-12-20 Thread aealexsandrov
Hi,

I see from your log that you have two nodes in your baseline:

[23:36:58,866][INFO][main][GridDiscoveryManager]   ^-- Node
[id=B3234010-F5BE-49CA-A57F-6676622BFF48, clusterState=INACTIVE]
[23:36:58,867][INFO][main][GridDiscoveryManager]   ^-- Baseline [id=2,
size=2, online=1, offline=1]
[23:36:58,867][INFO][main][GridDiscoveryManager]   ^-- 1 nodes left for
auto-activation [2e3bc707-ac4c-4ba6-a0c2-1867c6ea8272]

node 1 with id = B3234010-F5BE-49CA-A57F-6676622BFF48 was started.
node 2 with id = 2e3bc707-ac4c-4ba6-a0c2-1867c6ea8272 was offline.

So a cluster with persistence will be auto-activated only when all nodes from
the baseline are online.

You can read about it here:

https://apacheignite.readme.io/docs/baseline-topology
https://apacheignite.readme.io/docs/baseline-topology#section-activating-the-cluster

BR,
Andrei





Re: Register listeners before joining the cluster

2018-12-19 Thread aealexsandrov
Hi,

Yes, I understand. You can also contribute this new event to Ignite:

1)Create the Jira issue - https://issues.apache.org/jira
2)Prepare the patch in a separate branch named ignite-X (where X is the
number of the JIRA ticket)
3)Create a pull request in GitHub against the master branch -
https://github.com/apache/ignite
4)Use the next tool - https://mtcga.gridgain.com/ -> inspect contribution ->
trigger build
5)When the test run is done, check the report and, if there are no issues,
post a comment in Jira.
6)Create a new thread on the development user list
(http://apache-ignite-developers.2346864.n4.nabble.com/) and ask to review
your change and merge it.

Or just file the feature request and wait until it is processed, but that can
take some time.

BR,
Andrei





Re: Scoped vs Singleton ignite client node in .net web api

2018-12-19 Thread aealexsandrov
Hi,

Generally, you should have a working Ignite cluster somewhere that will
store and process your requests.

I am not sure that you really require the client node in your case. It will
use additional memory and CPU.

There are several ways to work with an Ignite cluster without a fat client
node:

1)Thin clients (ODBC/JDBC) - you may keep the connection open, but it can be
closed (e.g. by a timeout), so you can try to use a connection pool.

https://apacheignite.readme.io/docs/thin-clients

2)REST - just open a connection and do some operations. It can also be timed
out, so you could open a new connection for every update.

https://apacheignite.readme.io/docs/rest-api

BR,
Andrei





Re: IgniteCheckedException when write and read made by different clients

2018-12-19 Thread aealexsandrov
Hi,

I don't think so. However, you can try to use Integer instead of Enum values
and process them accordingly (it will require changes in the configuration).

What is the reason you can't move the class to the correct package?

BR,
Andrei





Re: Question about cpu usage and disk read speed on activation stage.

2018-12-19 Thread aealexsandrov
Hi,

Could you share your Ignite node configuration and the logs from startup?

BR,
Andrei





Re: Register listeners before joining the cluster

2018-12-19 Thread aealexsandrov
Hi,

It depends on what you are going to achieve.

If you just want to collect some information, you can use:

https://apacheignite.readme.io/docs/ignite-life-cycle 

BEFORE_NODE_START

Invoked before Ignite node startup routine is initiated.

If you are going to make a decision about whether a node should join or not,
you can use the following method (in a separate plugin):

https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/plugin/PluginProvider.html#validateNewNode-org.apache.ignite.cluster.ClusterNode-

BR,
Andrei





Re: Swap Space configuration - Exception_Access_Violation

2018-11-28 Thread aealexsandrov
Hi,

Possibly you are facing a JDK issue. Could you please provide the full error
message from the core dump file or a log like the following:

EXCEPTION_ACCESS_VIOLATION (0xc005) at pc=0x5e5521f4, pid=2980,
tid=0x3220 
# 
# JRE version: Java(TM) SE Runtime Environment (8.0_131-b11) (build
1.8.0_131-b11) 
# Java VM: Java HotSpot(TM) Client VM (25.131-b11 mixed mode windows-x86 ) 
# Problematic frame: 
# C [MSVCR100.dll+0x121f4] 
# 
# Failed to write core dump. Minidumps are not enabled by default on client
versions of Windows 
# 
# If you would like to submit a bug report, please visit: 
# http://bugreport.java.com/bugreport/crash.jsp 
# The crash happened outside the Java Virtual Machine in native code. 
# See problematic frame for where to report the bug. 

BR,
Andrei





Re: MyBatis L2 newby

2018-11-28 Thread aealexsandrov
Hi,

I think that you could take a look at the next thread:

http://apache-ignite-users.70518.x6.nabble.com/MyBatis-Ignite-integration-release-td2949.html

As far as I know, this integration was tested with Ignite:

https://apacheignite-mix.readme.io/docs/mybatis-l2-cache
http://www.mybatis.org/ignite-cache/

However, it has its own GitHub repository. I guess that you can find examples
there.

BR,
Andrei





Re: Swapspace - Getting OOM error ?maxSize more than RAM

2018-11-28 Thread aealexsandrov
Hi,

You faced that OOM issue because your cache was mapped to the default region
with max size 6.4 GB instead of 500MB_Region with max size 64GB.

I guess that you tried to load more than 6.4 GB of data into this cache.

Generally, you should explicitly set the data region for every cache that
should not be mapped to the default region.

BR,
Andrei





Re: Read SQL table via cache api

2018-11-23 Thread aealexsandrov
Hi,

Could you please set -DIGNITE_QUIET=false to turn off the quiet mode and
re-attach the logs?

BR,
Andrei





Re: Read SQL table via cache api

2018-11-23 Thread aealexsandrov
Hi,

I don't think that it's possible to understand the reason without logs. Please
attach them here, or, if you can reproduce the problem, file a Jira issue and
attach the logs there with a full description of the problem.

BR,
Andrei





Re: Does it affect the put, get, query operation when add or remove a node?

2018-11-23 Thread aealexsandrov
Hi,

The general answer is yes, because rebalancing of the data will be started and
you will not be able to process data until the partition map exchange is
completed.

https://apacheignite.readme.io/docs/rebalancing
https://cwiki.apache.org/confluence/display/IGNITE/%28Partition+Map%29+Exchange+-+under+the+hood

However, in the case of persistence, you will have the baseline topology. In
this case, rebalancing will be started only when the baseline topology changes
(data nodes are added to or removed from it). Other node starts/stops will not
affect your operations.

https://apacheignite.readme.io/docs/baseline-topology

BR,
Andrei





Re: How to force ignite clear disk space

2018-11-22 Thread aealexsandrov
Hi,

I thought that it was fixed here:

https://issues.apache.org/jira/browse/IGNITE-8021

But possibly you are facing another issue.

No, 2.7 isn't released yet, but you can use nightly builds to check it or
build from the source code.

If you can reproduce it, you may file a new Jira ticket here:
https://issues.apache.org/jira/projects/IGNITE/summary

Don't forget to attach to the ticket:

1)The reproducer code
2)The configuration of the nodes
3)Logs from the nodes

BR,
Andrei






Re: How to force ignite clear disk space

2018-11-22 Thread aealexsandrov
Thank you for the details.

I will check it in the 2.6 release, but I think that it should be fixed in 2.7.

BR,
Andrei





Re: Read SQL table via cache api

2018-11-21 Thread aealexsandrov
Hi,

I think that information about it should be in the node log. Could you please
provide the log of the node where you try to read the data?

BR,
Andrei





Re: How to force ignite clear disk space

2018-11-21 Thread aealexsandrov
Hi,

I think that you are facing the issue related to the following ticket:

https://gridgain.freshdesk.com/helpdesk/tickets/9049

After the cache is destroyed, some files that contain the cache configuration
remain. But this should already be fixed in the latest version of Ignite.

However, the files that contain the cache data should be removed. You can
check the disk size before and after destroying the cache.

Another possible issue is the WAL. It can grow without limit if you don't
configure the WAL history size or turn WAL logging off for some caches.

You can read about it here:

https://apacheignite.readme.io/docs/write-ahead-log
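
A sketch of limiting the WAL history in the node configuration (the value
here is hypothetical and should be tuned for your disk budget):

```xml
<property name="dataStorageConfiguration">
    <bean class="org.apache.ignite.configuration.DataStorageConfiguration">
        <!-- Keep at most 20 checkpoints worth of WAL history in the archive. -->
        <property name="walHistorySize" value="20"/>
    </bean>
</property>
```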

If you still see partition files (.bin) after destroying the cache, it could
be an issue. What is your Ignite version?

BR,
Andrei







Re: Cache stats like number of cache hits, misses, reads, write, avg

2018-11-21 Thread aealexsandrov
Hi,

DataStreamer depends on the allowOverwrite flag.

https://apacheignite.readme.io/docs/data-streamers#section-allow-overwrite

The default value is false to get the best performance, but it also means
that the metrics will not be updated.

If you set allowOverwrite to true, the data streamer will work similarly to
regular cache operations, and the metrics should work fine in this case.

BR,
Andrei





Re: persistence when disk full

2018-11-12 Thread aealexsandrov
Hi,

The expected behavior is that only the node without free space will fail.
Could you please attach the logs from both nodes so we can get more details?

BR,
Andrei





Re: Ignite issue during spark job execution

2018-11-12 Thread aealexsandrov
Hi,

It looks like your JVM is facing long garbage collection pauses. You can
configure detailed GC logs to see how much time is spent in GC:
https://apacheignite.readme.io/docs/jvm-and-system-tuning#section-detailed-garbage-collection-stats

Also, you can follow the next guide and add more heap:

https://apacheignite.readme.io/docs/jvm-and-system-tuning#garbage-collection-tuning
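
For reference, a typical starting set of JVM options for an Ignite server
node with detailed GC logging on Java 8 (the heap size and log path are
hypothetical and should be tuned for your workload):

```
-Xms10g -Xmx10g
-XX:+UseG1GC
-verbose:gc
-XX:+PrintGCDetails
-XX:+PrintGCDateStamps
-Xloggc:/path/to/gc.log
```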

BR,
Andrei





Re: ignite 2.7 connection pool support?

2018-11-12 Thread aealexsandrov
Hi,

The Ignite JDBC data source should be supported in the 2.7 release. You can
see a use case in the following test:

https://github.com/apache/ignite/blob/master/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinDataSourceSelfTest.java

However, you can ask Taras (the assignee of
https://issues.apache.org/jira/browse/IGNITE-6145) on the development user
list http://apache-ignite-developers.2346864.n4.nabble.com.

For example here:

http://apache-ignite-developers.2346864.n4.nabble.com/jira-Created-IGNITE-6145-JDBC-thin-support-connection-pooling-td21177.html

BR,
Andrei





Re: Status of Spark Structured Streaming Support IGNITE-9357

2018-11-12 Thread aealexsandrov
Hi,

Could you please re-post this question on the development user list:

http://apache-ignite-developers.2346864.n4.nabble.com/

The development user list is the better place to discuss new functionality.

BR,
Andrei





Re: Multiple directories/filesystems for storagePath, walPath, walArchivePath

2018-11-11 Thread aealexsandrov
Hi,

You can set storagePath, walPath and walArchivePath in the node configuration
as described here:

https://apacheignite.readme.io/docs/durable-memory-tuning#section-separate-disk-device-for-wal

But it looks like it's not possible to set multiple values for any of them.

BR,
Andrei







Re: Client connects to server after too long time interval (1 minute)

2018-11-11 Thread aealexsandrov
Hi,

Could you please attach the XML configurations of your client and server
nodes and logs?

BR,
Andrei







Re: Apache Ignite cannot access cache from listener after event “EVT_CACHE_STARTED” fired

2018-11-11 Thread aealexsandrov
Hi,

The cache could be started, but possibly the cluster wasn't activated yet.
Could you please provide some information:

1)Did you see warnings or errors in the log?
2)Did you see this issue on a normal cluster start, or only on client restart?

Also, I see that you use it in the class named "server config". Could you
please also provide the use case of this class?

BR,
Andrei





Re: Streamer without client node?

2018-11-11 Thread aealexsandrov
Hi,

As far as I know, you can use data streamers from a server node as well as
from a client.

BR,
Andrei





Re: SB2 with Ignite 2.6 Sample Application

2018-11-11 Thread aealexsandrov
Hi,

If you have an example of integration with Spring Boot 2, you can try to
contribute it to the Apache Ignite examples or provide the link here.
Possibly it will help other community members.

BR,
Andrei





Re: SB2 with Ignite 2.6 Sample Application

2018-11-08 Thread aealexsandrov
Hi,

I don't think that Ignite has an example specifically for Spring Boot
integration, but you can see the example of Spring integration:

https://github.com/apache/ignite/blob/master/examples/src/main/java/org/apache/ignite/examples/misc/springbean/SpringBeanExample.java

Do you have any problems with the Spring Boot integration?

BR,
Andrei





Re: what is Ignite Default atomicityMode

2018-11-08 Thread aealexsandrov
Hi,

Yes, the default atomicity mode is ATOMIC. Ignite uses the following code in
the initializeConfigDefaults method, which is applied to the default config,
to set the atomicityMode:

if (cfg.getAtomicityMode() == null)
    cfg.setAtomicityMode(CacheConfiguration.DFLT_CACHE_ATOMICITY_MODE);

So you can omit this option if you need ATOMIC.

BR,
Andrei





Re: My cluster can't activate after restart

2018-11-08 Thread aealexsandrov
Hi,

What do you mean when you say that:

1)the JDBC thin client isn't working;
2)the cluster is blocked at the activation stage?

How do you activate your cluster? Do you use persistence?

To get more logs, try to add the next option to the Java options of Ignite:

BR,
Andrei





Re: Question about baseline topology.

2018-10-26 Thread aealexsandrov
Hi,

Do you set backups in your cache configuration? By default, if you have 2
data nodes without backups, you can face data loss when one of your nodes
goes down. In this case, you may not see all the data.

But when this node starts again (with persistence) and joins the topology,
the data from it will be available again (after rebalance).

Also, you can read about partition loss policies here:

https://apacheignite.readme.io/docs/partition-loss-policies
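
A minimal sketch of a cache configured with one backup (the cache name is
hypothetical):

```xml
<bean class="org.apache.ignite.configuration.CacheConfiguration">
    <property name="name" value="myCache"/>
    <property name="cacheMode" value="PARTITIONED"/>
    <!-- One backup copy per partition, so losing a single node does not lose data. -->
    <property name="backups" value="1"/>
</bean>
```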

BR,
Andrei





Re: Cache stats like number of cache hits, misses, reads, write, avg

2018-10-25 Thread aealexsandrov
Hi,

Try to call setMetricsEnabled(true) on both regionCfg and storageCfg too.
Does it help?

BR,
Andrei 





Re: Cache stats like number of cache hits, misses, reads, write, avg

2018-10-24 Thread aealexsandrov
Hi,

I tested it, and it works. I attached my code for the server node and the
configuration for the visor.

Could you please check it?

startActivate.java
  
default-config.xml
  

Run the main class from startActivate.java and the visor cmd with the
provided config. As a result I see:

visor> cache -c=cacheNam -a
Time of the snapshot: 2018-10-24 18:03:38
++
|Name(@)|Mode | Nodes |   Entries (Heap / Off-heap)   |   
Hits |  Misses   |Reads|Writes|
++
| cacheNam(@c0) | PARTITIONED | 1 | min: 1000 (0 / 1000)  | min:
672| min: 0| min: 672| min: 1000|
|   | |   | avg: 1000.00 (0.00 / 1000.00) | avg:
672.00 | avg: 0.00 | avg: 672.00 | avg: 1000.00 |
|   | |   | max: 1000 (0 / 1000)  | max:
672| max: 0| max: 672| max: 1000|
++

Cache 'cacheNam(@c0)':
+-+
| Name(@) | cacheNam(@c0) |
| Nodes   | 1 |
| Total size Min/Avg/Max  | 1000 / 1000.00 / 1000 |
|   Heap size Min/Avg/Max | 0 / 0.00 / 0  |
|   Off-heap size Min/Avg/Max | 1000 / 1000.00 / 1000 |
+-+

Nodes for: cacheNam(@c0)
++
| Node ID8(@), IP  | CPUs | Heap Used | CPU Load |   Up Time|   
 
Size | Hi/Mi/Rd/Wr |
++
| 543734D6(@n0), 10.0.75.1 | 4| 1.89 %| 0.00 %   | 00:11:32.856 |
Total: 1000  | Hi: 672 |
|  |  |   |  |  |  
Heap: 0| Mi: 0   |
|  |  |   |  |  |  
Off-Heap: 1000 | Rd: 672 |
|  |  |   |  |  |  
Off-Heap Memory: 0 | Wr: 1000|
++
'Hi' - Number of cache hits.
'Mi' - Number of cache misses.
'Rd' - number of cache reads.
'Wr' - Number of cache writes.

Aggregated queries metrics:
  Minimum execution time: 00:00:00.000
  Maximum execution time: 00:00:00.000
  Average execution time: 00:00:00.000
  Total number of executions: 0
  Total number of failures:   0

Please check it, and if it works, then recheck your config (or attach it as
an XML file, because you provided only part of it).

BR,
Andrei





Re: Cache stats like number of cache hits, misses, reads, write, avg

2018-10-24 Thread aealexsandrov
Hi,

Could you please provide the full configuration of the node?

BR,
Andrei





Re: Sql execution with partitionId

2018-10-19 Thread aealexsandrov
Hi,

It should work as described in the documentation. If the node with the
primary partitions fails, the node with the backups will be used. If there
are no backups, an exception will be thrown.

BR,
Andrei





Re: Adding entry to partition that is concurrently evicted

2018-10-19 Thread aealexsandrov
Hi,

This exception shows that one of your partitions was moved to an invalid
state. It's not an expected case, and to investigate the reason we need the
log file too. Is it possible to provide it?

BR,
Andrei





Re: Adding entry to partition that is concurrently evicted

2018-10-18 Thread aealexsandrov
Hi,

Could you please attach the configuration of the cluster and full log?

BR,
Andrei





Re: SQL Engine

2018-10-17 Thread aealexsandrov
Hi Igor,

I think that optimizations for Ignite are better discussed on the Apache
Ignite developer list:

http://apache-ignite-developers.2346864.n4.nabble.com/

BR,
Andrei





Re: Ignite Select Query Performance

2018-10-17 Thread aealexsandrov
Hi,

We can try to check whether your configuration can be improved, but first of
all could you please provide some details:

1)How many data nodes you use.
2)The data node configuration.
3)The cache configuration or the DDL CREATE TABLE command for your cache
(table).

BR,
Andrei





Re: Failed to process selector key error

2018-10-17 Thread aealexsandrov
Hi,

Your client was disconnected from the server. It could be because of
different reasons:

1. Network problems on your client.
2. Very long GC pause on the server side.

I guess that you are facing the first one, because you noted that you
possibly had network problems.

In this case, the server is not able to get metrics from the client node for
some time and closes the connection. The data streamer operation fails in
this case because the connection was closed.

However, you can try to tune failure detection as described in the related
section here:

https://www.gridgain.com/sdk/pe/latest/javadoc/org/apache/ignite/spi/discovery/tcp/TcpDiscoverySpi.html
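
A sketch of raising the failure detection timeout in the node configuration
(the 30-second value is hypothetical; the default is 10 seconds):

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <!-- Allow up to 30 seconds of unresponsiveness before a node is considered failed. -->
    <property name="failureDetectionTimeout" value="30000"/>
</bean>
```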

BR,
Andrei





Re: Ignite complains for low heap memory

2018-10-17 Thread aealexsandrov
Hi,

The heap metric that you see in the topology message shows the max heap value
that your cluster can use:

Math.max(m.getHeapMemoryInitialized(), m.getHeapMemoryMaximum()

The initial heap size (-Xms) is different from the maximum (-Xmx). Your JVM
will be started with the Xms amount of memory and will be able to use at most
the Xmx amount of memory.
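
As a minimal illustration of the Math.max formula above (the -Xms/-Xmx
values here are hypothetical):

```java
public class HeapMetricExample {
    public static void main(String[] args) {
        long heapInit = 512L * 1024 * 1024;   // initial heap, e.g. -Xms512m
        long heapMax = 2048L * 1024 * 1024;   // maximum heap, e.g. -Xmx2g
        // The topology snapshot reports the larger of the two values:
        long reported = Math.max(heapInit, heapMax);
        System.out.println(reported / (1024 * 1024) + "MB");  // prints "2048MB"
    }
}
```

So a node started with -Xms512m -Xmx2g shows up with a 2 GB heap in the
topology message.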

It looks like Ignite recommends setting Xms to at least 512 MB.
About JVM tuning you can read here:

https://apacheignite.readme.io/docs/jvm-and-system-tuning

BR,
Andrei 





Re: How to use BinaryObject Cache API to query a table created from JDBC?

2018-10-12 Thread aealexsandrov
Hi,

Try the next example and follow the same approach for your table:

// Open the JDBC connection.
try {
Connection conn =
DriverManager.getConnection("jdbc:ignite:thin://127.0.0.1;user=user;password=password");
//remove user password if you don't have security

conn.createStatement().execute("CREATE TABLE IF NOT EXISTS
Person(\n" +
"  id int,\n" +
"  city_id int,\n" +
"  name varchar,\n" +
"  age int, \n" +
"  company varchar,\n" +
"  PRIMARY KEY (id, city_id)\n" +
") WITH
\"template=partitioned,backups=1,affinity_key=city_id, key_type=PersonKey,
value_type=Person\";"); //IMPORTANT add key_type and value_type

conn.createStatement().execute("INSERT INTO Person (id, name,
city_id) VALUES (1, 'John Doe1', 3);");
conn.createStatement().execute("INSERT INTO Person (id, name,
city_id) VALUES (2, 'John Doe2', 3);");
conn.createStatement().execute("INSERT INTO Person (id, name,
city_id) VALUES (3, 'John Doe3', 3);");
conn.createStatement().execute("INSERT INTO Person (id, name,
city_id) VALUES (4, 'John Doe4', 3);");
conn.createStatement().execute("INSERT INTO Person (id, name,
city_id) VALUES (5, 'John Doe5', 3);");
conn.createStatement().execute("INSERT INTO Person (id, name,
city_id) VALUES (6, 'John Doe6', 3);");
conn.createStatement().execute("INSERT INTO Person (id, name,
city_id) VALUES (7, 'John Doe7', 3);");

conn.close();

BinaryObject key = ignite.binary().builder("PersonKey") //same
as in key type
.setField("id", 1)
.setField("city_id", 3)
.build();

IgniteCache cache1 =
ignite.getOrCreateCache("SQL_PUBLIC_PERSON").withKeepBinary();

BinaryObject value = cache1.get(key);

System.out.println("Name is " + value.field("name"));
}
        catch (SQLException e) {
            e.printStackTrace();
        }

Output is:

Name is John Doe1

BR,
Andrei






Re: client probe cache metadata

2018-10-11 Thread aealexsandrov
Hi,

You can get all caches configuration from Java API:

IgniteCache cache = dr1Hub.getOrCreateCache("CACHE");

CacheConfiguration conf = cache.getConfiguration(CacheConfiguration.class);

System.out.println("");
System.out.println("Cache name " + conf.getName());
System.out.println("Atomicity mode " + conf.getAtomicityMode());
System.out.println("Backups " + conf.getBackups());
System.out.println("");

Output:


Cache name CACHE
Atomicity mode TRANSACTIONAL
Backups 1


It is also possible to get it from MBeans:

1)Start jconsole.
2)Choose the MBeans tab.
3)Go to org.apache -> id -> gridname -> cache_groups -> cache_name ->
attributes.

BR,
Andrei





Re: Ignite TcpDiscoveryS3IpFinder and AWS VPC

2018-10-11 Thread aealexsandrov
Hi,

Regarding '127.0.0.1#47500' and '0:0:0:0:0:0:0:1/0#47500':

These are just different views of the same address (localhost). For example,
an Ignite node started on 10.0.75.1 could be reachable via the following
addresses from the same node:

>>> Local node addresses: [LAPTOP-I5CE4BEI/0:0:0:0:0:0:0:1,
>>> LAPTOP-I5CE4BEI.mshome.net/10.0.75.1, LAPTOP-I5CE4BEI/127.0.0.1,
>>> LAPTOP-I5CE4BEI.gridgain.local/172.18.24.193, /172.25.4.187,
>>> /192.168.56.1]

0:0:0:0:0:0:0:1/0 is a CIDR notation. You can read more about it here:

https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing#CIDR_notation

Regarding VPC peering and Ignite: I don't think that this scenario was
tested. However, you can try to follow the common guidance on how it should
be set up:

https://docs.aws.amazon.com/vpc/latest/peering/peering-scenarios.html

BR,
Andrei







Re: IgniteSparkSession exception:Ignite instance with provided name doesn't exist. Did you call Ignition.start(..) to start an Ignite instance? [name=null]

2018-09-20 Thread aealexsandrov
Hi,

Can you provide the code and configuration? The following works okay for me:

//start the server
Ignite ignite = Ignition.start("server_config.xml");

//activate the cluster
ignite.cluster().active(true);

//create the spark session with daemon client node inside
IgniteSparkSession igniteSparkSession = IgniteSparkSession.builder()
.appName("example")
.master("local")
.config("spark.executor.instances", "2")
.igniteConfig("client_config.xml")
.getOrCreate();

System.out.println("spark start--");

igniteSparkSession.sql("SELECT * FROM DdaRecord");

BR,
Andrei





Re: Is there a way to use Ignite optimization and Spark optimization together when using Spark Dataframe API?

2018-09-19 Thread aealexsandrov
Hi,

I am not sure that it will work, but you can try the following:

SparkSession spark = SparkSession
.builder()
.appName("SomeAppName")
.master("spark://10.0.75.1:7077")
.config(OPTION_DISABLE_SPARK_SQL_OPTIMIZATION, "false") //or
true
.getOrCreate();

        JavaSparkContext sparkContext = new
JavaSparkContext(spark.sparkContext());

        //the config path here is just an example
        JavaIgniteContext<Integer, Integer> igniteContext =
            new JavaIgniteContext<>(sparkContext, "config.xml");

        JavaIgniteRDD<Integer, Integer> igniteRdd1 = igniteContext.fromCache("CACHE1");
        JavaIgniteRDD<Integer, Integer> igniteRdd2 = igniteContext.fromCache("CACHE2");

        //here the Ignite SQL engine will be used, because a SqlFieldsQuery
        //runs inside
        Dataset<Row> ds1 = igniteRdd1.sql("select * from CACHE1");
        Dataset<Row> ds2 = igniteRdd2.sql("select * from CACHE2");

        //here the Spark SQL engine will be used
        ds1.join(ds2).where(...);

BR,
Andrei





  1   2   >