Hi,
I have written some simple code to find out the partition id for a data key.
Please check the following code.
When I run the code keeping assetGroupId constant for different ids, I get
the same partition id.
As you can see, the for loop runs 1000 million times, changing 'id' in every
step, and I
Hi,
I am using ignite datastreamer to pull the data from a kafka topic.
The data is getting loaded into the Ignite cache, but it is not getting
written to the 3rd-party persistence (MySQL DB).
I have set the cacheStoreFactory to my CustomClass which has extended
CacheStoreAdapter class.
Code
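For comparison, a minimal sketch of what such a store can look like; the table, columns, JDBC URL, and credentials below are all assumptions, not details from the original post:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import javax.cache.Cache;
import javax.cache.integration.CacheWriterException;
import org.apache.ignite.cache.store.CacheStoreAdapter;

// Hypothetical sketch of a CacheStore that writes through to MySQL.
public class MySqlPersonStore extends CacheStoreAdapter<Long, String> {
    private static final String URL = "jdbc:mysql://localhost:3306/test";

    @Override public String load(Long key) {
        // Read-through: look the value up in MySQL (omitted for brevity).
        return null;
    }

    @Override public void write(Cache.Entry<? extends Long, ? extends String> entry) {
        try (Connection conn = DriverManager.getConnection(URL, "user", "pass");
             PreparedStatement ps = conn.prepareStatement(
                 "INSERT INTO person (id, name) VALUES (?, ?) ON DUPLICATE KEY UPDATE name = ?")) {
            ps.setLong(1, entry.getKey());
            ps.setString(2, entry.getValue());
            ps.setString(3, entry.getValue());
            ps.executeUpdate();
        }
        catch (Exception e) {
            throw new CacheWriterException("Failed to write entry: " + entry, e);
        }
    }

    @Override public void delete(Object key) {
        // Remove the row from MySQL (omitted for brevity).
    }
}
```

Keep in mind that a data streamer only invokes the configured store when updates are propagated to it (see the allowOverwrite/skipStore discussion later in this thread).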
Hi Gaurav
Decoupling file reading and cache streaming requires some kind of messaging
layer in between, right? Initially, since it's a bulk activity we will be
doing, I did not want to have additional memory and system resources
consumed by the introduction of a messaging layer.
But the
In my case, the config of the application will be:
1. 3 nodes with 24GB RAM and up to 1TB of disk data
2. Ignite is embedded in a Java web application server
3. Azul Zing JVM with an on-heap Ignite cache of 16GB
4. 100mbps network speed or better
5. each node will have to serve at least 10K req/sec. each request
There are some objects in a one-to-many relation (to simplify).
For example, there is a list with field descriptions (field type, range of
possible values, etc.) and items of this list.
It is necessary to somehow store the items correctly in the cache and get
them from there when filtering on the
Hi
I am using Ignite 2.3
Have 2 tables
Table 1: Customer - primary Key is PartyId
Table 2: Account - primary key is AccountID (also has PartyID as one of the
columns)
To keep both customer and account data for a customer on the same node I
need to use affinity key for Account table. And
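For reference, with Ignite's SQL DDL (available since 2.3) collocation can be declared at table creation time. A sketch with assumed columns; note that in Ignite DDL the affinity-key column must be part of the primary key, hence the composite key on Account:

```sql
-- Customer: PartyID is the primary key, so it is also the affinity key.
CREATE TABLE Customer (
  PartyID BIGINT PRIMARY KEY,
  Name    VARCHAR
) WITH "template=partitioned";

-- Account: rows are collocated with Customer by PartyID.
CREATE TABLE Account (
  AccountID BIGINT,
  PartyID   BIGINT,
  Balance   DOUBLE,
  PRIMARY KEY (AccountID, PartyID)
) WITH "template=partitioned, affinity_key=PartyID";
```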
We have a main partitioned cache, CacheA, and a local cache, CacheB; we
periodically write back from local CacheB to the main cache.
The main CacheA object includes these fields:
fieldA,
fieldB,
fieldC,
fieldD
The local CacheB includes these fields:
fieldA
fieldC
fieldD
fieldE
And update
Thanks Alexey.
We are in the middle of development; we may go live in the next 3 to 4
months.
If we are done with the COPY command by then, we are good.
When is 2.5 going to be released?
Thanks
Naveen
--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/
I am trying to perform a simple insert into an Ignite cache (version 2.2)
from a Spark application, using the code below (in Scala):
val addresses = new util.ArrayList[String]()
addresses.add("127.0.0.1:48500..48520")
// IGNITE CONTEXT CONFIGURATIONS
val igniteContext: IgniteContext = new
Thanks Stan for your quick response.
Actually I am looking for help on how to specify the following details in
the XML configuration file:
1) SegmentationPolicy other than the default value
2) SegmentCheckFrequency
3) SegmentationResolveAttempts
and other required settings in the configuration file.
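These map to plain IgniteConfiguration properties, so in Spring XML they can be set directly; a sketch with example values only:

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <!-- What to do when this node gets segmented from the cluster. -->
    <property name="segmentationPolicy" value="RESTART_JVM"/>
    <!-- How often (in milliseconds) to run segment checks. -->
    <property name="segmentCheckFrequency" value="10000"/>
    <!-- How many attempts to resolve segmentation before applying the policy. -->
    <property name="segmentationResolveAttempts" value="2"/>
</bean>
```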
Thanks
It looks like you have some issues with your Maven installation. Make sure
you're able to build some simple projects, e.g. go over the Maven guide:
https://maven.apache.org/guides/getting-started/.
Stan
Hi,
You need to set IgniteDataStreamer.allowOverwrite(true), as the javadoc
says: "Note that when this flag is {@code false}, updates will not be
propagated to the cache store (i.e. {@link #skipStore()} flag will be set to
{@code true} implicitly)."
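A minimal sketch of the difference (the cache name and types here are assumptions):

```java
try (IgniteDataStreamer<Long, String> stmr = ignite.dataStreamer("myCache")) {
    // Default is allowOverwrite(false): existing keys are not overwritten and,
    // importantly, skipStore() is implicitly true, so the CacheStore is bypassed.
    stmr.allowOverwrite(true); // enable overwrites AND propagation to the cache store
    stmr.addData(1L, "value");
}
```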
Evgenii
2018-03-12 11:34 GMT+03:00 vbm
Hi,
It seems to me that there are two questions in this post:
1) How to store cross-referencing data in an IgniteCache?
2) How to extract data from server based on some complex filter?
First, on the cross-references. To store a reference to another entity,
which is stored as a separate entry in
I have set the overwrite flag to true.
stmr.allowOverwrite(true);
What is the significance of the skipStore flag?
What is the flow for an entry from setMultipleTupleExtractor to reach the
cache?
I am thinking it should go through the write method, with which it gets put
into the cache. I have
Hi,
Your code for getting a partition ID is correct.
Ignite can't use your "id" field to calculate the partition: it just doesn't
know anything about it. It doesn't depend on the name, so calling it "id"
doesn't help, and it doesn't depend on the backing Oracle DB, so it doesn't
know that "id" is
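To make the point concrete, here is a simplified, standalone model of the idea: only the affinity field's hash decides the partition. This is an illustration of the principle, not Ignite's actual RendezvousAffinityFunction.

```java
// Simplified model: hash the affinity field only, then map it onto a fixed
// number of partitions. Any other fields in the key are irrelevant.
public class PartitionDemo {
    static final int PARTS = 1024; // Ignite's default partition count

    static int partition(Object affinityKey) {
        int h = affinityKey.hashCode() % PARTS;
        return h < 0 ? -h : h; // safe absolute value of the remainder
    }

    public static void main(String[] args) {
        // Every key sharing the same affinity field maps to the same partition,
        // no matter how the other key fields ("id", etc.) vary.
        System.out.println(partition("assetGroup-42") == partition("assetGroup-42")); // true
    }
}
```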
Hi,
So, in short, you want to have some data structure basically separate from
Ignite that would just use Ignite's SQL engine. Is that right? If so,
LOCAL caches sound like the closest thing Ignite has for this. I'm not sure
if it would be much simpler if it didn't use Ignite's
Igniters,
Thought I’d share a couple of Apache Ignite talks happening this week.
Apache Ignite evangelist Akmal Chaudhri is in Portland, Oregon today for the
OpenIoTSummit North America conference. The symposium, which runs through
Wednesday, is designed for developers and architects working
> Curious to know how fast disk KV persistence is, since Ignite iterates
> over all keys and indexes to do the computation. Is disk KV persistence as
> efficient as in other stable NoSQL databases like Cassandra?
> Does the number of partitions help in better key lookup access from disk?
>
Generally, you should have an index for each combination of fields you use in
a query. The primary key gives you an implicit index, but you need to create
the rest yourself.
In your case, I'd suggest having AccountID as the primary key (without
PartyID) and also creating a compound index for the pair
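A sketch of that suggestion in Ignite SQL; the column set here is assumed:

```sql
-- The primary key on AccountID gives an implicit index.
CREATE TABLE Account (
  AccountID BIGINT PRIMARY KEY,
  PartyID   BIGINT,
  Balance   DOUBLE
) WITH "template=partitioned";

-- Compound index for queries that filter on both PartyID and AccountID.
CREATE INDEX idx_account_party ON Account (PartyID, AccountID);
```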
Hi Stan,
Thank you for the clarification. Now it is clear that the partition id is
always selected based on the field annotated with the @AffinityKeyMapped
annotation.
Now the question is: how does Ignite locate the data bucket in a given
partition?
Does it use all the properties defined in the key object and
Hello Igniters,
I wonder if anybody could advise on the following.
I am trying to implement a simple extract, transform, load (ETL) process
using the Apache Ignite 2.3 platform.
In the extract phase, local caches are loaded with some data and then passed
to the transform phase, where the SQL SELECT (sometimes
Ignite will take care of evicting the data from memory itself. The hot data
will be in memory, and the rarely accessed data will be loaded on demand. In
other words, rarely accessed data will not occupy memory space all the time.
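This behavior comes from Ignite's page memory with native persistence enabled; a sketch of enabling it in Spring XML (the 4GB region size is just an example value):

```xml
<property name="dataStorageConfiguration">
    <bean class="org.apache.ignite.configuration.DataStorageConfiguration">
        <property name="defaultDataRegionConfiguration">
            <bean class="org.apache.ignite.configuration.DataRegionConfiguration">
                <!-- Hot pages stay in this much RAM; cold pages are read from disk on demand. -->
                <property name="maxSize" value="#{4L * 1024 * 1024 * 1024}"/>
                <property name="persistenceEnabled" value="true"/>
            </bean>
        </property>
    </bean>
</property>
```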
Thanks,
Stan
Thanks Stanislav.
The use cases for certain tables being disk-only are:
1. OLAP report queries on tables whose access patterns are rare, time-bound
and minimal, e.g. creating a management report from my web application
2. Materialized view tables which are created by listening to
Hi Stanislav,
Yes, ideally a simple map I can feed with key-value pairs that could be
accessed using H2 SQL.
Thank you for the address, I will try to post my question there.
regards,
zbyszek
Currently an expired entry is removed only from memory. The same will be
supported for the persistence layer after this ticket is implemented:
https://issues.apache.org/jira/browse/IGNITE-5874.
-Val
Cache eviction is based on hit/miss ratio or simple LRU.
But as I mentioned in point #3, there might be services which are hit
multiple times but where good latency is not a requirement. I don't want the
cache to evict any records when querying those few tables.
There may be some misunderstanding.
What I meant was that the data most likely WILL have a good distribution
with assetGroupId being the @AffinityKeyMapped field. To have a good
distribution it is generally enough to have a lot of data groups, so that it
is likely that each partition stores a more or less equal
Thank you very much for the responses.
Will keep an eye on when it will be released. Any estimated release date for
the fix?
One more question I asked was: can we provide a high watermark for the
persistence store?
On Mon, Mar 12, 2018 at 12:24 PM vkulichenko
wrote:
>
Thanks for the reply.
What surprises me is that out of 6 nodes, 3 Ignite nodes started without any
issue, but the remaining three failed to restart with the error mentioned in
the subject, even though all six Ignite nodes share the same configuration.
On taking a further look at the PersistentStore
> Cache is based on hit or miss ratio or simple LRU
LRU
> But as I mentioned in point #3 there might be services which hit multiple
> times but good latency is not the requirement. I dont want cache to evict
> any records when querying to such few Tables.
What you could do in this case is
Hello All,
In my project I am trying to use Ignite for the first time. I am trying to
store Protocol Buffer messages as value objects in my cache.
After spending a lot of time researching and experimenting, I've come to the
conclusion that Protobuf and Ignite SQL do not play very well together. The
only
Hi Naveen,
There is no scheduled date for it yet. The 2.4 release has just been voted
on and accepted.
Based on the version history (https://ignite.apache.org/download.cgi) one
could say that the average time between releases is about 2-3 months, so
it's quite possible that 2.5 will be released in
We need a durable cache for storing blobs, with the following requirements:
1. The blob could be from 1 MB to 1 GB.
2. We do not have to index the blob document.
3. The cache entry should be durable in case of node failure, so we need
replication and partitioning.
4. There should be a write-behind hook so that