Re: Task failover in ignite

2018-09-28 Thread Prasad Bhalerao
Hi,

Ignite doc says "at least once guarantee".

If I submit the task using just
"ignite.compute().withNoFailover().affinityRun()",
then Ignite will try to execute this task on a backup node if the primary
node goes down.

Does Ignite immediately start rebalancing when a node goes down?

I am trying to understand how Ignite re-executes an affinity task on the
backup or new primary node when the primary goes down.

Does Ignite wait for rebalancing to complete and then resubmit the
affinity task to the new primary node?

Or does it resubmit the task to the backup node, wait for the task to
complete, and then do the rebalancing?

In case of node failure, does the backup node become the new primary for
its backup partitions, or is that decided after the partition exchange
process?

How does Ignite decide which node will become the new primary for backup
partitions so that a minimum of data exchange happens?

Thanks,
Prasad

On Sat, Sep 29, 2018, 2:31 AM vkulichenko 
wrote:

> Prasad,
>
> Since you're using withNoFailover(), failover will never happen and the
> task
> will just fail with an exception on the client side if the primary node dies. It's
> up to your code to retry in this case.
>
> When you retry, the task will be mapped to the new primary, which is former
> backup and therefore has all the data. No need to wait for rebalancing.
>
> In general, affinityRun/Call guarantees that all data is available locally
> during task execution. If that's not possible for any reason, an exception
> is thrown.
>
> -Val
>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: .NET ContinuousQuery loses cache entries

2018-09-28 Thread Alew

Hi, attached a reproducer.
Turning off logs fixes the issue, but slow logging is not the only trigger:
more nodes in a cluster lead to the same behaviour.

Who is responsible for the behavior? Is it .net, java, bad docs or me?

On 24/09/2018 20:03, Alew wrote:

Hi!

I need a way to consistently get all entries in a replicated cache and
then all updates to them while the application is running.


I use ContinuousQuery for it.

var cursor = cache.QueryContinuous(
    new ContinuousQuery<TKey, byte[]>(new CacheListener(), true),
    new ScanQuery<TKey, byte[]>()).GetInitialQueryCursor();

But I have some issues with it.

Sometimes the cursor returns only part of the entries in the cache, and the
cache listener does not return the missing ones either.


Sometimes the cursor and the cache listener both return the same entry.

The issue is somehow related to the amount of work the nodes have to do and
the amount of time between the start of the publisher node and the
subscriber node.


There are more problems if nodes start at the same time.

Is there a reliable way to do this without controlling the order of node
start and the pauses between them?







Re: Ignite Query Slow

2018-09-28 Thread Andrey Mashenkov
Please take a look at this.

https://apacheignite.readme.io/v2.6/docs/indexes#section-queryentity-based-configuration

On Sep 29, 2018, 3:41 AM, "Skollur" wrote:

Thank you for the suggestion. Can you give an example of how to create
secondary indices?




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Apache Ignite Join returns wrong records

2018-09-28 Thread Andrey Mashenkov
Hi,

Try to use qry.setDistributedJoins(true). This should always return the
correct result.

However, it performs poorly due to the intensive data exchange between
nodes.

By default, Ignite joins only the data available locally on each node. Try
to collocate your data to get better performance with non-distributed joins.
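The collocation point above can be illustrated without Ignite. The toy model
below (plain Java, not Ignite code; all names are made up) partitions rows by
primary key rather than by the join key, so a node-local join misses pairs
whose rows land on different "nodes" — which is exactly how a non-distributed
join can undercount:

```java
// Toy model of collocated vs distributed joins. Rows are (id, type) pairs
// hash-partitioned by id; the join predicate is on type.
public class LocalJoinDemo {
    static final int NODES = 2;

    static int node(long id) { return (int) (id % NODES); } // partition by PK

    // Returns {localMatches, fullMatches} for a join on the type column.
    static int[] countMatches(long[][] summary, long[][] sequence) {
        int local = 0, full = 0;
        for (long[] s : summary)
            for (long[] q : sequence)
                if (s[1] == q[1]) {              // join predicate: same type
                    full++;                      // what a distributed join sees
                    if (node(s[0]) == node(q[0]))
                        local++;                 // what a local-only join sees
                }
        return new int[] {local, full};
    }

    public static void main(String[] args) {
        long[][] summary  = {{1, 0}, {2, 0}};    // (id, type)
        long[][] sequence = {{1, 0}, {4, 0}};    // ids land on different nodes
        int[] m = countMatches(summary, sequence);
        System.out.println(m[0] + " of " + m[1] + " matches seen locally");
    }
}
```

Collocating both tables by the join key would make the local and full counts
equal, which is why affinity collocation is the faster alternative to
distributed joins.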

On Sat, Sep 29, 2018, 3:39 AM Skollur wrote:

> I am using Apache Ignite 2.6 version. I have two tables as below i.e
> SUMMARY
> and SEQUENCE
>
> SUMMARY-> DW_Id bigint (Primary key) , Sumamry_Number varchar, Account_Type
> varchar
> SEQUENCE-> DW_Id bigint (Primary key) , Account_Type varchar
>
> Database and cache have the same number of records in both tables. The
> database JOIN query returns 1500 records; however, the Ignite JOIN returns
> only 4. The Ignite cache is built from the auto-generated Web Console code.
> The query used is below. There is no key involved while joining the two
> cache tables; this is a simple join based on a value (i.e. account type, a
> string). How do I get the correct result for a JOIN in Ignite?
>
> SELECT COUNT(*) FROM SUMMARY LIQ
> INNER JOIN SEQUENCE CPS ON
> LIQ.Account_Type = CPS.Account_Type
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Ignite Query Slow

2018-09-28 Thread Skollur
Thank you for the suggestion. Can you give an example of how to create
secondary indices?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Apache Ignite Join returns wrong records

2018-09-28 Thread Skollur
I am using Apache Ignite version 2.6. I have two tables, as below, i.e.
SUMMARY and SEQUENCE:

SUMMARY-> DW_Id bigint (Primary key) , Sumamry_Number varchar, Account_Type
varchar
SEQUENCE-> DW_Id bigint (Primary key) , Account_Type varchar

Database and cache have the same number of records in both tables. The
database JOIN query returns 1500 records; however, the Ignite JOIN returns
only 4. The Ignite cache is built from the auto-generated Web Console code.
The query used is below. There is no key involved while joining the two
cache tables; this is a simple join based on a value (i.e. account type, a
string). How do I get the correct result for a JOIN in Ignite?

SELECT COUNT(*) FROM SUMMARY LIQ 
INNER JOIN SEQUENCE CPS ON
LIQ.Account_Type = CPS.Account_Type




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Ignite Query Slow

2018-09-28 Thread Andrey Mashenkov
Hi,

Please try to create secondary indices on the join columns; otherwise the
query will fall back to a full table scan.

Then, if you still see SCANs, as a next step you can try to rewrite your
query with a different table join order. Sometimes the underlying H2 engine
changes the join order to a non-optimal one. In that case
qry.setEnforceJoinOrder(true) may be helpful.

It looks like there should be a single lookup on the ID column, and two
index scans for the joins.
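What a secondary index buys the join can be sketched in plain Java
(illustrative only; in Ignite the equivalent is a CREATE INDEX statement or
an indexed QueryEntity field). Without an index, every probe scans the whole
table; with one, each probe becomes a single map lookup:

```java
import java.util.*;

// A secondary "index" is just a map from the join-column value to the rows
// that carry it. Building it once turns O(n) scans per probe into O(1)
// lookups. Row layout here (id, type) is a made-up stand-in for a table.
public class SecondaryIndexDemo {
    static Map<String, List<String[]>> indexByType(List<String[]> rows) {
        Map<String, List<String[]>> idx = new HashMap<>();
        for (String[] row : rows)
            idx.computeIfAbsent(row[1], k -> new ArrayList<>()).add(row);
        return idx;
    }

    public static void main(String[] args) {
        List<String[]> sequence = Arrays.asList(
            new String[] {"1", "checking"},
            new String[] {"2", "savings"},
            new String[] {"3", "checking"});

        // One hash probe replaces a full scan for each lookup value.
        Map<String, List<String[]>> idx = indexByType(sequence);
        System.out.println(idx.get("checking").size()); // prints 2
    }
}
```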

On Fri, Sep 28, 2018, 7:02 PM Skollur wrote:

> Here is the explain query
>
> #   PLAN
> 1   "SELECT
> ADDR__Z2.ADDRESS_LINE_1 AS __C0_0,
> ADDR__Z2.ADDRESS_LINE_2 AS __C0_1,
> ADDR__Z2.ADDRESS_LINE_3 AS __C0_2,
> ADDR__Z2.STREET AS __C0_3,
> ADDR__Z2.CITY AS __C0_4,
> ADDR__Z2.STATE AS __C0_5,
> ADDR__Z2.COUNTRY AS __C0_6,
> ADDR__Z2.ZIP_POSTAL AS __C0_7
> FROM "GroupAddressCache".GROUP_ADDRESS GA__Z1
> /* "GroupAddressCache".GROUP_ADDRESS.__SCAN_ */
> /* WHERE (GA__Z1.ADDRESS_TYPE = 'Mailing')
> AND (GA__Z1.RECORD_IS_VALID = 'Y')
> */
> INNER JOIN "GroupCache"."[GROUP]" GRP__Z0
> /* "GroupCache"."[GROUP]".__SCAN_ */
> ON 1=1
> /* WHERE (GRP__Z0.RECORD_IS_VALID = 'Y')
> AND ((GRP__Z0.GROUP_CUSTOMER_ID = 44)
> AND (GRP__Z0.GROUP_CUSTOMER_ID = GA__Z1.GROUP_CUSTOMER_ID))
> */
> INNER JOIN "AddressCache".ADDRESS ADDR__Z2
> /* "AddressCache"."_key_PK_proxy": DW_ID = GA__Z1.ADDRESS_ID */
> ON 1=1
> WHERE (GA__Z1.ADDRESS_ID = ADDR__Z2.DW_ID)
> AND ((GA__Z1.ADDRESS_TYPE = 'Mailing')
> AND ((GA__Z1.RECORD_IS_VALID = 'Y')
> AND ((GRP__Z0.GROUP_CUSTOMER_ID = GA__Z1.GROUP_CUSTOMER_ID)
> AND ((GRP__Z0.GROUP_CUSTOMER_ID = 44)
> AND (GRP__Z0.RECORD_IS_VALID = 'Y')"
> 2   "SELECT
> __C0_0 AS ADDRESS_LINE_1,
> __C0_1 AS ADDRESS_LINE_2,
> __C0_2 AS ADDRESS_LINE_3,
> __C0_3 AS STREET,
> __C0_4 AS CITY,
> __C0_5 AS STATE,
> __C0_6 AS COUNTRY,
> __C0_7 AS ZIP_POSTAL
> FROM PUBLIC.__T0
> /* PUBLIC."merge_scan" */"
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Cluster is not responsive after node segmentation and reconciliation

2018-09-28 Thread Ariel Tubaltsev
Apache 2.4
A cluster of 3 in-memory nodes, REPLICATED, TRANSACTIONAL.

- All 3 nodes got segmented around the same time (Local node SEGMENTED)
- After reconciliation, all records are lost
- Cluster starts to accumulate transactions: (Pending transaction deadlock
detection futures)
At this point, clients requests won't be served any more

It can also go to a state where a node cannot join the grid: ERROR
GridServiceProcessor:482 - Error when executing
service: null.

Also, clients may end up with JVM OOM (log attached).

Questions:
- Is this a known issue?
- Would persistence help here?
- Any treatment for OOM?




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Task failover in ignite

2018-09-28 Thread vkulichenko
Prasad,

Since you're using withNoFailover(), failover will never happen and the task
will just fail with an exception on the client side if the primary node dies. It's
up to your code to retry in this case.

When you retry, the task will be mapped to the new primary, which is former
backup and therefore has all the data. No need to wait for rebalancing.

In general, affinityRun/Call guarantees that all data is available locally
during task execution. If that's not possible for any reason, an exception
is thrown.
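The client-side retry Val recommends can be sketched in plain Java. This is
illustrative only: the affinityRun call is replaced by a generic Runnable,
and runWithRetry / MAX_ATTEMPTS are made-up names, not Ignite API:

```java
// Client-side retry loop: re-run the task when the primary node dies.
// After the failure, the former backup is promoted to primary and holds
// all the data, so a simple retry is enough — no need to wait for
// rebalancing to finish.
public class AffinityRetry {
    static final int MAX_ATTEMPTS = 5;

    // Runs the task until it succeeds or MAX_ATTEMPTS is exhausted;
    // returns the number of attempts actually made.
    static int runWithRetry(Runnable task) {
        for (int attempt = 1; ; attempt++) {
            try {
                task.run();
                return attempt;
            } catch (RuntimeException e) {
                if (attempt >= MAX_ATTEMPTS)
                    throw e; // give up: surface the failure to the caller
                // Real code would back off briefly here before retrying.
            }
        }
    }

    public static void main(String[] args) {
        int[] failuresLeft = {2}; // simulate the primary dying twice
        int attempts = runWithRetry(() -> {
            if (failuresLeft[0]-- > 0)
                throw new RuntimeException("primary node left topology");
        });
        System.out.println("succeeded after " + attempts + " attempts");
    }
}
```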

-Val





--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Is ID generator split brain compliant?

2018-09-28 Thread abatra
To give more information on how I restart the server:

I run server and client inside their own respective docker container and I
issue 'docker stop' for server node container ID to stop and then restart
it.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


LRU Evicts Most Recent Entries

2018-09-28 Thread HEWA WIDANA GAMAGE, SUBASH
Hi everyone,
We use Ignite 1.9, a three-node cluster at the moment (partitioned cache with
1 backup, eviction max 500MB, cache expiry = 15 minutes), and we can see the
following behavior.

The logs in [1] below are from one JVM. The cluster started on 2018-09-26 at
2 AM, and this issue started happening predictably after a couple of days (we
have seen this many times over the past months). When we recycle the JVM,
everything comes back to normal.

Here you can see three cache puts happening a few seconds apart. If you
compare the timestamps, every put is followed by an eviction. We can see the
same pattern of logs for every cache key coming into the system, meaning that
when the second request comes in a few seconds later, the entry has already
been evicted and we get a cache miss.

The only cache operations we use are put, get, contains and remove.

For the whole time from server startup there were no errors from the Ignite
side, not even a timeout from the TCP communication SPI.

The only significant thing I could find was that a few hours before this
behavior (which eventually shows up as the cache hit rate going down because
of the evictions) took place, there were 36,000 "CACHE_ENTRY_EVICTED" events
fired within one second (2018-09-27 11:51:18 to 2018-09-27 11:51:19). 4084
and 4056 are the cache sizes one minute before and after the above scenario
as per Ignite metrics. Looks very suspicious to me, but I could not find
anything to relate it to. No error logs; heap and CPU are fine all the time.
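For contrast with the behavior reported above: under true LRU semantics the
least-recently-ACCESSED entry is evicted first, never the entry that was just
written. A size-bounded LinkedHashMap in access order is the textbook sketch
(plain Java, not Ignite's eviction-policy implementation):

```java
import java.util.*;

// Minimal LRU cache: LinkedHashMap with accessOrder=true reorders entries
// on every get/put, and removeEldestEntry drops the least-recently-used
// entry once the size bound is exceeded.
public class LruSketch {
    static <K, V> Map<K, V> lru(int maxEntries) {
        return new LinkedHashMap<K, V>(16, 0.75f, true) { // true = access order
            @Override protected boolean removeEldestEntry(Map.Entry<K, V> e) {
                return size() > maxEntries;
            }
        };
    }

    public static void main(String[] args) {
        Map<String, String> cache = lru(2);
        cache.put("a", "1");
        cache.put("b", "2");
        cache.get("a");      // touch "a": now "b" is least recently used
        cache.put("c", "3"); // evicts "b", not the just-written "c"
        System.out.println(cache.keySet()); // prints [a, c]
    }
}
```

An eviction arriving seconds after a put, as in the logs above, is the
opposite of this behavior, which is what makes the report look like a bug or
a misconfigured size limit rather than normal LRU churn.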


[1]

CACHE_OBJECT_PUT
2018-09-28 12:48:41,844Event received: CacheEvent 
[cacheName=mycache, part=214, 
key=06cda95e28ba8266e1f2d94cfbbcbeb308cccd1904fcac4f54a1d48a0e224752, xid=null, 
lockId=GridCacheVersion [topVer=149325845, time=3076306530984, 
order=1537944629823, nodeOrder=13], newVal=com.main.CacheableResponse 
@47386d2b, oldVal=null, hasOldVal=false, hasNewVal=true, near=false, 
subjId=37f25eae-79dd-4de7-90a6-0ee34a0f5c9c, cloClsName=null, taskName=null, 
nodeId8=cc93a6bb, evtNodeId8=4d018b28, msg=Cache event., type=CACHE_OBJECT_PUT, 
tstamp=1538153321836]

2018-09-28 12:48:43,643Event received: CacheEvent 
[cacheName=mycache, part=214, 
key=06cda95e28ba8266e1f2d94cfbbcbeb308cccd1904fcac4f54a1d48a0e224752, xid=null, 
lockId=GridCacheVersion [topVer=149325845, time=3076306532784, 
order=1537944629892, nodeOrder=13], newVal= com.main.CacheableResponse 
@28bf326c, oldVal=null, hasOldVal=false, hasNewVal=true, near=false, 
subjId=4d018b28-5e69-49f7-9d5e-a0c3c098b71b, cloClsName=null, taskName=null, 
nodeId8=cc93a6bb, evtNodeId8=4d018b28, msg=Cache event., type=CACHE_OBJECT_PUT, 
tstamp=1538153323637]

2018-09-28 12:48:59,290Event received: CacheEvent 
[cacheName=mycache, part=214, 
key=06cda95e28ba8266e1f2d94cfbbcbeb308cccd1904fcac4f54a1d48a0e224752, xid=null, 
lockId=GridCacheVersion [topVer=149325845, time=3076306668429, 
order=1537944630394, nodeOrder=13], newVal= com.main.CacheableResponse 
@154a3c61, oldVal=null, hasOldVal=false, hasNewVal=true, near=false, 
subjId=cc93a6bb-0736-423b-92e9-91ae7883ec93, cloClsName=null, taskName=null, 
nodeId8=cc93a6bb, evtNodeId8=4d018b28, msg=Cache event., type=CACHE_OBJECT_PUT, 
tstamp=1538153339280]

CACHE_ENTRY_EVICTED
2018-09-28 12:48:42,177Event received: CacheEvent 
[cacheName=mycache, part=214, 
key=06cda95e28ba8266e1f2d94cfbbcbeb308cccd1904fcac4f54a1d48a0e224752, xid=null, 
lockId=null, newVal=null, oldVal= com.main.CacheableResponse@717a7062, 
hasOldVal=true, hasNewVal=false, near=false, subjId=null, cloClsName=null, 
taskName=null, nodeId8=cc93a6bb, evtNodeId8=cc93a6bb, msg=Cache event., 
type=CACHE_ENTRY_EVICTED, tstamp=1538153322171]

2018-09-28 12:48:53,202Event received: CacheEvent 
[cacheName=mycache, part=214, 
key=06cda95e28ba8266e1f2d94cfbbcbeb308cccd1904fcac4f54a1d48a0e224752, xid=null, 
lockId=null, newVal=null, oldVal= com.main.CacheableResponse @46434293, 
hasOldVal=true, hasNewVal=false, near=false, subjId=null, cloClsName=null, 
taskName=null, nodeId8=cc93a6bb, evtNodeId8=cc93a6bb, msg=Cache event., 
type=CACHE_ENTRY_EVICTED, tstamp=153815200]

2018-09-28 12:49:00,552Event received: CacheEvent [cacheName= 
mycache, part=214, 
key=06cda95e28ba8266e1f2d94cfbbcbeb308cccd1904fcac4f54a1d48a0e224752, xid=null, 
lockId=null, newVal=null, oldVal= com.main.CacheableResponse @51d4e8e4, 
hasOldVal=true, hasNewVal=false, near=false, subjId=null, cloClsName=null, 
taskName=null, nodeId8=cc93a6bb, evtNodeId8=cc93a6bb, msg=Cache event., 
type=CACHE_ENTRY_EVICTED, tstamp=1538153340544]






Re: Setting performance expectations

2018-09-28 Thread Gaurav Bajaj
It means EhCache performance is better than Ignite's, at least in LOCAL mode :)

On 22-Sep-2018 3:33 AM, "Daryl Stultz"  wrote:

> I've discovered that "partitioned" is the default Cache Mode. I set it to
> "local". Things run faster now. Still not as fast as expected.
>
> 2018-09-21 20:54:41.733 DEBUG - c.o.i.IgniteSpec : load 428765 rows into
> local map in 976 ms
> 2018-09-21 20:54:43.562 DEBUG - c.o.i.IgniteSpec : putAll map to cache in
> 1829 ms
> 2018-09-21 20:54:44.031 DEBUG - c.o.i.IgniteSpec : get 428765 elements one
> at a time in 469 ms
> 2018-09-21 20:54:44.313 DEBUG - c.o.i.IgniteSpec : get all keys at once in
> 282 ms
>
> Here are metrics for the same test run with Ehcache. How can I get these
> same numbers from Ignite?
>
> 2018-09-21 21:28:37.273 DEBUG - c.o.i.EhcacheSpec : load 428765 rows into
> local map in 988 ms
> 2018-09-21 21:28:38.299 DEBUG - c.o.i.EhcacheSpec : putAll map to cache in
> 1025 ms
> 2018-09-21 21:28:38.326 DEBUG - c.o.i.EhcacheSpec : get 428765 elements
> one at a time in 27 ms
> 2018-09-21 21:28:38.536 DEBUG - c.o.i.EhcacheSpec : get all keys at once
> in 210 ms
>
> /Daryl
>


Re: java.lang.IllegalArgumentException: Can not set final

2018-09-28 Thread smurphy
Hi Ilya,

I don't think the errors are caused by the binary objects, but by having
custom objects in the EntryProcessor. Check out the attached
FragmentAssignToScannerEntryProcessor.java. Instead of injecting the
BinaryFragmentConverter into FragmentAssignToScannerEntryProcessor, I moved
all the binary object code from BinaryFragmentConverter directly into
FragmentAssignToScannerEntryProcessor.
This works fine.

Then I created the static FragmentAssignToScannerEntryProcessor.Tester
class, which has no binary objects, added it as a member variable of
FragmentAssignToScannerEntryProcessor, and that resulted in the same
IllegalArgumentException as before: "failed to read field [name=tester]"

FragmentAssignToScannerEntryProcessor.java

  

stackTrace.txt
  



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Unreasonable segmentation in Kubernetes on one node reboot

2018-09-28 Thread artem_zin
Please ignore the ports; I don't think it matters in the end if connections
with these random ports are established as client sockets, as long as the
exposed server sockets use fixed ports.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Is ID generator split brain compliant?

2018-09-28 Thread abatra
I tried debugging it a bit. 

Now I am running only 1 server node, and the other node as a client. Even
with a single server node, the atomic sequence is reset to its initial value
after the server node restarts.

I checked the IGNITE_HOME folder: on every restart, the contents are
recreated with a different node ID.

*ID before restart*
node00-46580d24-2fdb-484a-ba63-3a05248c0b8d

*ID after restart*
node00-beae3aab-6165-485a-96e3-1a2fe5b7f20b

I believe this might be the reason the sequence does not persist across
restarts. However, I am unsure why I am running into this, because I am
pretty sure that I am running only 1 server from 1 node.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Reading from the SQLServer persistence store with write-behind setting

2018-09-28 Thread ilya.kasnacheev
Hello!

In order to use loadCaches you will have to define types in the JDBC cache
store, since it obviously needs to know the structure of your data.

You can find example of that in Ignite documentation:
https://apacheignite.readme.io/docs/3rd-party-store#section-manual

Regards,



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Ignite Query Slow

2018-09-28 Thread Skollur
Here is the explain query

#   PLAN
1   "SELECT
ADDR__Z2.ADDRESS_LINE_1 AS __C0_0,
ADDR__Z2.ADDRESS_LINE_2 AS __C0_1,
ADDR__Z2.ADDRESS_LINE_3 AS __C0_2,
ADDR__Z2.STREET AS __C0_3,
ADDR__Z2.CITY AS __C0_4,
ADDR__Z2.STATE AS __C0_5,
ADDR__Z2.COUNTRY AS __C0_6,
ADDR__Z2.ZIP_POSTAL AS __C0_7
FROM "GroupAddressCache".GROUP_ADDRESS GA__Z1
/* "GroupAddressCache".GROUP_ADDRESS.__SCAN_ */
/* WHERE (GA__Z1.ADDRESS_TYPE = 'Mailing')
AND (GA__Z1.RECORD_IS_VALID = 'Y')
*/
INNER JOIN "GroupCache"."[GROUP]" GRP__Z0
/* "GroupCache"."[GROUP]".__SCAN_ */
ON 1=1
/* WHERE (GRP__Z0.RECORD_IS_VALID = 'Y')
AND ((GRP__Z0.GROUP_CUSTOMER_ID = 44)
AND (GRP__Z0.GROUP_CUSTOMER_ID = GA__Z1.GROUP_CUSTOMER_ID))
*/
INNER JOIN "AddressCache".ADDRESS ADDR__Z2
/* "AddressCache"."_key_PK_proxy": DW_ID = GA__Z1.ADDRESS_ID */
ON 1=1
WHERE (GA__Z1.ADDRESS_ID = ADDR__Z2.DW_ID)
AND ((GA__Z1.ADDRESS_TYPE = 'Mailing')
AND ((GA__Z1.RECORD_IS_VALID = 'Y')
AND ((GRP__Z0.GROUP_CUSTOMER_ID = GA__Z1.GROUP_CUSTOMER_ID)
AND ((GRP__Z0.GROUP_CUSTOMER_ID = 44)
AND (GRP__Z0.RECORD_IS_VALID = 'Y')"
2   "SELECT
__C0_0 AS ADDRESS_LINE_1,
__C0_1 AS ADDRESS_LINE_2,
__C0_2 AS ADDRESS_LINE_3,
__C0_3 AS STREET,
__C0_4 AS CITY,
__C0_5 AS STATE,
__C0_6 AS COUNTRY,
__C0_7 AS ZIP_POSTAL
FROM PUBLIC.__T0
/* PUBLIC."merge_scan" */"



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Off heap constantly grow on use BinaryObject as field of cached data

2018-09-28 Thread Serg
Hi,  

I have changed my data model and the problem is gone.

But it looks like I should be careful with the data I upload, and it would
be nice to know which data I can use. Maybe I missed something in the docs
about data preparation?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Off heap constantly grow on use BinaryObject as field of cached data

2018-09-28 Thread ilya.kasnacheev
Hello!

Is it still a problem for you? I can see a slight decrease of the fill
factor on your chart coupled with a slight increase of data region usage.
With regard to fragmentation, that is to be expected.

I have tried your test:
15:04:54,356 INFO  [grid-timeout-worker-#23] 
Metrics for local node (to disable set 'metricsLogFrequency' to 0)
^-- Node [id=14b5701b, uptime=00:01:00.014]
^-- PageMemory [pages=6157]
^-- Heap [used=105MB, free=69.72%, comm=350MB]
^-- Non heap [used=62MB, free=-1%, comm=65MB]

Then, ten minutes later:

15:15:54,402 INFO  [grid-timeout-worker-#23] 
Metrics for local node (to disable set 'metricsLogFrequency' to 0)
^-- Node [id=14b5701b, uptime=00:12:00.058]
^-- PageMemory [pages=6212]
^-- Heap [used=230MB, free=34.11%, comm=350MB]
^-- Non heap [used=67MB, free=-1%, comm=68MB]

Regards,




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Task failover in ignite

2018-09-28 Thread Prasad Bhalerao
Hi,

I have created multiple Ignite runnable tasks by extending the
IgniteRunnable and IgniteCallable interfaces. I am submitting these tasks to
the primary data node using the
"ignite.compute().withNoFailover().affinityRun()" method.

I have not set the max failover attempts, so I think the default of 5
applies. I have kept the backup count as 1.

What happens when the primary node executing this task goes down?

In this case, does Ignite move this task to the backup node and execute it
there?

When a node goes down, Ignite starts rebalancing the cluster. In this case,
how will this task be executed?

Does Ignite wait for the rebalancing process to complete and then execute
this task?

Can someone please explain this in detail?



Thanks,
Prasad


Re: Problems are enabling Ignite Persistence

2018-09-28 Thread Lokesh Sharma
Hi

Thanks for looking into this.


> First, I had to change h2 version dependency from 1.4.197 to 1.4.195 -
> maybe
> this is because I have later snapshot of Apache Ignite compiled.


The version is already 1.4.195.

Second, I had to activate the cluster! You have to activate persistent
> cluster after you bring all nodes up. Just add ignite.active(true); on
> IgniteConfig.java:92


After I added "ignite.active(true)" it works for me. However, when I remove
this line on the second run of the app, the original problem persists.
Shouldn't the cluster be automatically activated on the second run, as the
documentation says?



On Fri, Sep 28, 2018 at 6:55 PM ilya.kasnacheev 
wrote:

> Hello!
>
> Seems to be working for me after two changes.
>
> First, I had to change h2 version dependency from 1.4.197 to 1.4.195 -
> maybe
> this is because I have later snapshot of Apache Ignite compiled.
>
> Second, I had to activate the cluster! You have to activate persistent
> cluster after you bring all nodes up. Just add ignite.active(true); on
> IgniteConfig.java:92
>
> The error message about the activation requirement was clear and I could
> still exit the app with Ctrl-C. Maybe you have extra problems if you are
> on some unstable commit of the master branch.
>
> Regards,
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Is there a way to use Ignite optimization and Spark optimization together when using Spark Dataframe API?

2018-09-28 Thread Ray
Actually there's only one row in b.

SELECT COUNT(*) FROM b where x = '1';
COUNT(*)  1

1 row selected (0.003 seconds)

Maybe it's because the join performance drops dramatically when the data
size is more than 10 million rows, or when the cluster has a lot of clients
connected? My 6-node cluster has 10 clients connected to it and some of them
have slow network connectivity.





--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Is there a way to use Ignite optimization and Spark optimization together when using Spark Dataframe API?

2018-09-28 Thread ilya.kasnacheev
Hello!

I have indeed tried a use case like yours:

0: jdbc:ignite:thin://127.0.0.1/> create index on b(x,y); 
No rows affected (9,729 seconds)
0: jdbc:ignite:thin://127.0.0.1/> select count(*) from a;
COUNT(*)  1

1 row selected (0,017 seconds)
0: jdbc:ignite:thin://127.0.0.1/> select count(*) from b;
COUNT(*)  4194304

1 row selected (0,024 seconds)
0: jdbc:ignite:thin://127.0.0.1/> select a.x,a.y from a join b where a.y =
b.y and a.x = b.x; 
X  1
Y  1

1 row selected (0,005 seconds)
0: jdbc:ignite:thin://127.0.0.1/> explain select a.x,a.y from a join b where
a.y = b.y and a.x = b.x;
PLAN  SELECT
__Z0.X AS __C0_0,
__Z0.Y AS __C0_1
FROM PUBLIC.A __Z0
/* PUBLIC.A.__SCAN_ */
INNER JOIN PUBLIC.B __Z1
/* PUBLIC."b_x_asc_y_asc_idx": Y = __Z0.Y
AND X = __Z0.X
 */
ON 1=1
WHERE (__Z0.Y = __Z1.Y)
AND (__Z0.X = __Z1.X)

PLAN  SELECT
__C0_0 AS X,
__C0_1 AS Y
FROM PUBLIC.__T0
/* PUBLIC."merge_scan" */

2 rows selected (0,007 seconds)
^ very fast, compared to 1,598 seconds before index was created

My standing idea is that you have very low selectivity on b.x, i.e. if 10
million out of 14 million rows in b have x = 1, then an index will not be
able to help and will only hurt. Can you execute SELECT COUNT(*) FROM b
WHERE x = 1; on your dataset?

Regards,



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Problems are enabling Ignite Persistence

2018-09-28 Thread ilya.kasnacheev
Hello!

Seems to be working for me after two changes.

First, I had to change h2 version dependency from 1.4.197 to 1.4.195 - maybe
this is because I have later snapshot of Apache Ignite compiled.

Second, I had to activate the cluster! You have to activate persistent
cluster after you bring all nodes up. Just add ignite.active(true); on
IgniteConfig.java:92

The error message about the activation requirement was clear and I could
still exit the app with Ctrl-C. Maybe you have extra problems if you are on
some unstable commit of the master branch.

Regards,



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: ignite .net programatically access cluster config (not working)

2018-09-28 Thread ilya.kasnacheev
Hello!

You can send compute tasks to other nodes to determine their data regions
using code.

You can also just take a look at Web Console to see if it suits your needs:
https://apacheignite-tools.readme.io/docs/ignite-web-console

Regards,



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Is ID generator split brain compliant?

2018-09-28 Thread Pavel Pereslegin
Ankit,

 > 1. I run two nodes in the cluster.
I checked a 2-node cluster restart (simultaneous, 2 processes on the same
machine) with the same result - after restart the "current" value != the
"initial" value.

 > 2. I restart the entire node and hence the server gets restarted.
I gave this example only for clarity - in this case it does not matter how
you stop the server. The main thing is that the persistence directory should
not be cleared, but since you mentioned that the data in the cache is
available after restart, I believe it is not cleaned.

 > Otherwise, I am not sure how is that working out for you.
Please try running the code yourself and check the results.

 > I see that you are stressing on "cleaned persistence directory".
It's not necessary - I cleaned the persistence directory just to see the
expected result.

Persistence should work for atomics "out of the box"; otherwise we should
create an issue in ASF Jira. But for the time being I cannot understand in
which case this issue occurs.
On Fri, Sep 28, 2018 at 10:52 AM, abatra wrote:
>
> There are only two differences in the way you tried the example:
>
> 1. I run two nodes in the cluster.
> 2. I do not stop and configure the server while the JVM is running. I
> restart the entire node and hence the server gets restarted.
>
> Otherwise, I am not sure how is that working out for you.
>
> Also, I see that you are stressing on "cleaned persistence directory".
> Please let me know if I am missing something there.
>
> My server nodes run as two different docker containers on two different open
> stack server nodes.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Reading from the SQLServer persistence store with write-behind setting

2018-09-28 Thread michal23849
Hi,
Do I have to write any method for cache.loadCache, or should it be used as
is?

Is using loadCache an alternative to using readThrough?
I tried both options and can't see any attempt during cache startup to even
try loading the data from my SQL Server database. ReadThrough doesn't seem
to work at all, though write-behind works absolutely fine.

Unless I am missing some extra configuration - I expected that it would use
my XML cache mappings.

My setup is below:

[XML cache configuration stripped by the mailing-list archive]
(...)

My Java code for starting cache:

private static IgniteCache myCache = null;

public void start() throws StorageException {
    Ignite ignite = myCacheFactory.start();
    if (myCache == null) {
        synchronized (lock) {
            if (myCache == null) {
                myCache = ignite.getOrCreateCache("myCache");
                myCache.loadCache(null);
            }
        }
    }
}

How do I trigger Ignite to actually load the data from the store
(CacheJdbcPojoStoreFactory using the SQL Server JDBC driver)?

Thanks
Michal




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Is ID generator split brain compliant?

2018-09-28 Thread abatra
There are only two differences in the way you tried the example:

1. I run two nodes in the cluster.
2. I do not stop and configure the server while the JVM is running. I
restart the entire node and hence the server gets restarted.

Otherwise, I am not sure how is that working out for you.

Also, I see that you are stressing on "cleaned persistence directory".
Please let me know if I am missing something there.

My server nodes run as two different docker containers on two different open
stack server nodes.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


RE: Unable to get the ignite cache metrics

2018-09-28 Thread V, Krupa (Nokia - IN/Bangalore)
I have used the parameter: 
ignite_org_apache_ignite_internal_processors_cache_cacheclustermetricsmxbeanimpl_averageputtime
 
I am still getting only zero.

-Original Message-
From: ezhuravlev  
Sent: Thursday, September 20, 2018 8:48 PM
To: user@ignite.apache.org
Subject: Re: Unable to get the ignite cache metrics

Hi,

Looks like you are trying to access local cache metrics on a client node,
which will be empty, because clients don't store cache data. Try to use
CacheClusterMetricsMXBeanImpl instead.

Evgenii



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Unreasonable segmentation in Kubernetes on one node reboot

2018-09-28 Thread artem_zin
Here is where the random ports come from: both ServerImpl and ClientImpl ask
TcpDiscoverySpi for a socket, and it's implemented as `sock.bind(new
InetSocketAddress(locHost, 0));`

https://github.com/apache/ignite/blob/2.6.0/modules/core/src/main/java/org/apache/ignite/spi/discovery/tcp/TcpDiscoverySpi.java#L1471

I'm sure it's fine in general, because, well, that's how you get a client
socket, but I'm not sure it plays well with Kubernetes given how strict it
is about container ports. I do, however, expect k8s to only require ingress
ports to be exposed explicitly.
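For reference, binding to port 0 (as in the TcpDiscoverySpi line quoted
above) simply asks the OS for an ephemeral local port, which is the normal
pattern for outgoing client sockets:

```java
import java.net.InetSocketAddress;
import java.net.Socket;

// Binding an unconnected socket to port 0 lets the OS pick any free
// ephemeral port; getLocalPort() then reports which one was chosen.
public class EphemeralPortDemo {
    public static void main(String[] args) throws Exception {
        try (Socket sock = new Socket()) {
            sock.bind(new InetSocketAddress("127.0.0.1", 0)); // 0 = "pick one"
            int port = sock.getLocalPort();
            System.out.println(port > 0 && port <= 65535); // prints true
        }
    }
}
```

Because the random port belongs to the local (client) end of the connection,
only the fixed server-side ports need to be exposed in a Kubernetes
container spec.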



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Is there a way to use Ignite optimization and Spark optimization together when using Spark Dataframe API?

2018-09-28 Thread Ray
Here's the detailed information for my join test.

0: jdbc:ignite:thin://sap-datanode6/> select * from a;
x  1
y  1
A   bearbrick

1 row selected (0.002 seconds)
0: jdbc:ignite:thin://sap-datanode6/> select count(*) from b;
COUNT(*)  14337959

1 row selected (0.299 seconds)
0: jdbc:ignite:thin://sap-datanode6/> select x,y from b where _key = '1';
x  1
y  1

1 row selected (0.002 seconds)


select a.x,a.y from a join b where a.x = b.x and a.y = b.y;
x  1
y  1

1 row selected (6.036 seconds)  -- takes 6 seconds to join a one-row table
to a 14-million-row table using affinity key x

explain select a.x,a.y from a join b where a.x = b.x and a.y = b.y;

PLAN  SELECT
A__Z0.x AS __C0_0,
A__Z0.y AS __C0_1
FROM PUBLIC.B__Z1
/* PUBLIC.B.__SCAN_ */
INNER JOIN PUBLIC.T A__Z0
/* PUBLIC.AFFINITY_KEY: x = B__Z1.x */
ON 1=1
WHERE (A__Z0.y = B__Z1.y)
AND (A__Z0.x = B__Z1.x)

PLAN  SELECT
__C0_0 AS x,
__C0_1 AS y
FROM PUBLIC.__T0
/* PUBLIC."merge_scan" */

If I create an index on table b on fields x and y, it takes 6.8 seconds to
finish the join.

create index on b(x,y);
No rows affected (31.316 seconds)

0: jdbc:ignite:thin://sap-datanode6/> select a.x,a.y from a join b where a.y
= b.y and a.x = b.x;
x  1
y  1

1 row selected (6.865 seconds)

0: jdbc:ignite:thin://sap-datanode6/> explain select a.x,a.y from a join b
where a.y = b.y and a.x = b.x;
PLAN  SELECT
A__Z0.x AS __C0_0,
A__Z0.y AS __C0_1
FROM PUBLIC.T A__Z0
/* PUBLIC.T.__SCAN_ */
INNER JOIN PUBLIC.B__Z1
/* PUBLIC."b_x_asc_y_asc_idx": y = A__Z0.y
AND x = A__Z0.x
 */
ON 1=1
WHERE (A__Z0.y = B__Z1.y)
AND (A__Z0.x = B__Z1.x)

PLAN  SELECT
__C0_0 AS x,
__C0_1 AS y
FROM PUBLIC.__T0
/* PUBLIC."merge_scan" */

2 rows selected (0.003 seconds)

Here's my configuration (Spring XML - most of the markup was stripped by the
mailing-list archive; the recoverable part is the discovery address list):

node1:49500
node2:49500
node3:49500
node4:49500
node5:49500
node6:49500

--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/