Of course, it's not trivial... and changes to the database are required (a new
field on the primary table (better), or a new "extended partition table" with a
1-to-1 relationship to the primary table (primary table id, partitionId)). But
using a CacheStoreAdapter implementation it's not that complex. I would do:
1. over
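A minimal sketch of the CacheStoreAdapter approach Manuel describes, assuming a hypothetical PERSON table with id, name, and partitionId columns, a cache named "personCache", and a JDBC DataSource; all of these names are illustrative, not from the thread. Each node asks Ignite for the partitions it owns and queries only those rows:

```java
import java.sql.*;
import javax.cache.integration.CacheLoaderException;
import javax.sql.DataSource;
import org.apache.ignite.Ignite;
import org.apache.ignite.cache.store.CacheStoreAdapter;
import org.apache.ignite.lang.IgniteBiInClosure;
import org.apache.ignite.resources.IgniteInstanceResource;

// Loads only the rows whose partitionId belongs to the partitions
// for which the local node is primary.
public class PartitionAwarePersonStore extends CacheStoreAdapter<Long, String> {
    @IgniteInstanceResource
    private Ignite ignite;

    private final DataSource ds; // hypothetical JDBC data source

    public PartitionAwarePersonStore(DataSource ds) { this.ds = ds; }

    @Override
    public void loadCache(IgniteBiInClosure<Long, String> clo, Object... args) {
        // Partitions owned by this node.
        int[] parts = ignite.affinity("personCache")
                            .primaryPartitions(ignite.cluster().localNode());
        String sql = "SELECT id, name FROM PERSON WHERE partitionId IN ("
            + placeholders(parts.length) + ")";
        try (Connection conn = ds.getConnection();
             PreparedStatement ps = conn.prepareStatement(sql)) {
            for (int i = 0; i < parts.length; i++)
                ps.setInt(i + 1, parts[i]);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next())
                    clo.apply(rs.getLong(1), rs.getString(2));
            }
        }
        catch (SQLException e) {
            throw new CacheLoaderException(e);
        }
    }

    // Builds a "?, ?, ?" placeholder list for a JDBC IN clause.
    static String placeholders(int n) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < n; i++)
            sb.append(i == 0 ? "?" : ", ?");
        return sb.toString();
    }

    // Remaining CacheStore methods omitted for brevity.
    @Override public String load(Long key) { return null; }
    @Override public void write(
        javax.cache.Cache.Entry<? extends Long, ? extends String> e) { }
    @Override public void delete(Object key) { }
}
```

With this store, calling cache.loadCache(null) on the cluster makes each node run only its own partition-filtered query instead of a full table scan.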
Manuel,
Good point!
But this *may* require some changes in the database, which in general is not
always possible.
Also, it is not so trivial to update the database with the correct partition ID.
On Tue, Oct 18, 2016 at 3:22 PM, Manu wrote:
> Have you tried partitioning your data? It's pretty simple: add a field
> (integer partitionId) to your table, so each node will load only its
> own partitions. You can see an example here:
> http://apacheignite.gridgain.org/docs/data-loading#section-partition-aware-data-loading
Have you tried partitioning your data? It's pretty simple: add a field
(integer partitionId) to your table, so each node will load only its own
partitions. You can see an example here:
http://apacheignite.gridgain.org/docs/data-loading
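For this to work, the stored partitionId must agree with the partition Ignite computes for each key. In real code you would call ignite.affinity("personCache").partition(key) when backfilling the column; the key-to-partition step of Ignite's default RendezvousAffinityFunction is essentially a safe-absolute hash modulo the partition count, sketched here in plain Java (cache name and column are hypothetical):

```java
// Pure-Java sketch of Ignite's default key -> partition mapping:
// safe-abs of hashCode() modulo the partition count.
// In production, prefer ignite.affinity("personCache").partition(key)
// so the value always matches the configured affinity function.
public class PartitionIdBackfill {
    static int partitionFor(Object key, int parts) {
        int h = key.hashCode() % parts;
        return h < 0 ? -h : h; // safe abs; |h| < parts, so no overflow
    }

    public static void main(String[] args) {
        int parts = 1024; // Ignite's default partition count
        for (long id : new long[] {1L, 42L, 100_000L})
            System.out.println("id=" + id
                + " -> partitionId=" + partitionFor(id, parts));
    }
}
```

A one-time UPDATE of the table with these values (and keeping them in sync on insert) is what makes the partition-filtered loadCache query possible.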
On 18 Oct 2016, at 9:06, Alexey Kuznetsov wrote:
Alexey,
I see. Thank you very much.
From: Alexey Kuznetsov
Date: 2016-10-18 15:06
To: user@ignite.apache.org
Subject: Re: Can't increase the speed of loadCache() when increasing more
Ignite node
Bob,
In the current Ignite implementation, if you are loading data via a cache
store, each node will iterate over the whole data set
and take only those keys that satisfy the affinity function.
So:
in the case of one node, all keys will be loaded;
in the case of two nodes, the first node will iterate over the WHOLE data set
but keep only about half of the keys (those the affinity function assigns to
it), and the second node will do the same for its half.
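Alexey's point can be illustrated with a toy simulation in plain Java (a simple hash-modulo stands in for Ignite's real affinity function): no matter how many nodes join, each node still scans all N rows, and only the fraction it keeps shrinks, which is why adding nodes barely changes the loadCache() time:

```java
public class LoadCacheSimulation {
    // Number of rows a given node keeps when every node scans all rows
    // but keeps only the keys mapped to it (modulo stand-in for affinity).
    static int keptByNode(int totalRows, int nodes, int nodeIndex) {
        int kept = 0;
        for (int key = 0; key < totalRows; key++)
            if (key % nodes == nodeIndex) // affinity check: is this key mine?
                kept++;
        return kept;
    }

    public static void main(String[] args) {
        int totalRows = 1_000_000;
        for (int nodes : new int[] {1, 2, 4})
            System.out.printf(
                "nodes=%d rowsScannedPerNode=%d rowsKeptByNode0=%d%n",
                nodes, totalRows, keptByNode(totalRows, nodes, 0));
    }
}
```

The scan count per node stays at 1,000,000 regardless of cluster size, so the database read remains the bottleneck; partition-aware loading (below in the thread) removes exactly that redundant scanning.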
Hi,
I load data into Ignite from Oracle with loadCache().
I load 1,000,000 rows. When the Ignite cluster has one node, it takes 2m27s.
With two nodes it takes 2m18s, with three 2m15s, with four 2m11s.
I have tested reading the same 1,000,000 rows through JDBC; it takes 40s.
Why don't more nodes increase the speed of loadCache()?